Jan 13 20:34:39.951982 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:34:39.952007 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:34:39.952022 kernel: BIOS-provided physical RAM map:
Jan 13 20:34:39.952030 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:34:39.952038 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:34:39.952046 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:34:39.952055 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 13 20:34:39.952063 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 13 20:34:39.952071 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 20:34:39.952082 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 20:34:39.952091 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:34:39.952098 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:34:39.952112 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:34:39.952120 kernel: NX (Execute Disable) protection: active
Jan 13 20:34:39.952130 kernel: APIC: Static calls initialized
Jan 13 20:34:39.952145 kernel: SMBIOS 2.8 present.
Jan 13 20:34:39.952154 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 13 20:34:39.952162 kernel: Hypervisor detected: KVM
Jan 13 20:34:39.952171 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:34:39.952179 kernel: kvm-clock: using sched offset of 2926278704 cycles
Jan 13 20:34:39.952188 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:34:39.952197 kernel: tsc: Detected 2794.750 MHz processor
Jan 13 20:34:39.952207 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:34:39.952216 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:34:39.952225 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 13 20:34:39.952237 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:34:39.952246 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:34:39.952255 kernel: Using GB pages for direct mapping
Jan 13 20:34:39.952263 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:34:39.952272 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 13 20:34:39.952281 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:34:39.952290 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:34:39.952300 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:34:39.952308 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 13 20:34:39.952320 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:34:39.952329 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:34:39.952338 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:34:39.952347 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:34:39.952356 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 13 20:34:39.952365 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 13 20:34:39.952379 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 13 20:34:39.952391 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 13 20:34:39.952400 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 13 20:34:39.952410 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 13 20:34:39.952419 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 13 20:34:39.952430 kernel: No NUMA configuration found
Jan 13 20:34:39.952440 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 13 20:34:39.952449 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 13 20:34:39.952461 kernel: Zone ranges:
Jan 13 20:34:39.952471 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:34:39.952480 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 13 20:34:39.952489 kernel: Normal empty
Jan 13 20:34:39.952506 kernel: Movable zone start for each node
Jan 13 20:34:39.952515 kernel: Early memory node ranges
Jan 13 20:34:39.952524 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:34:39.952533 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 13 20:34:39.952542 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 13 20:34:39.952555 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:34:39.952567 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:34:39.952576 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 13 20:34:39.952620 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 20:34:39.952629 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:34:39.952639 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:34:39.952648 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:34:39.952657 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:34:39.952667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:34:39.952680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:34:39.952689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:34:39.952698 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:34:39.952708 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:34:39.952717 kernel: TSC deadline timer available
Jan 13 20:34:39.952726 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 20:34:39.952735 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:34:39.952745 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 20:34:39.952756 kernel: kvm-guest: setup PV sched yield
Jan 13 20:34:39.952766 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 20:34:39.952778 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:34:39.952787 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:34:39.952797 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 20:34:39.952806 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 20:34:39.952815 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 20:34:39.952824 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 20:34:39.952833 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:34:39.952843 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:34:39.952853 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:34:39.952866 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:34:39.952875 kernel: random: crng init done
Jan 13 20:34:39.952885 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:34:39.952894 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:34:39.952903 kernel: Fallback order for Node 0: 0
Jan 13 20:34:39.952913 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 13 20:34:39.952922 kernel: Policy zone: DMA32
Jan 13 20:34:39.952931 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:34:39.952944 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 138948K reserved, 0K cma-reserved)
Jan 13 20:34:39.952953 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:34:39.952962 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:34:39.952972 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:34:39.952981 kernel: Dynamic Preempt: voluntary
Jan 13 20:34:39.952990 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:34:39.953000 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:34:39.953010 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:34:39.953019 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:34:39.953032 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:34:39.953041 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:34:39.953050 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:34:39.953063 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:34:39.953072 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 20:34:39.953093 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:34:39.953102 kernel: Console: colour VGA+ 80x25
Jan 13 20:34:39.953111 kernel: printk: console [ttyS0] enabled
Jan 13 20:34:39.953121 kernel: ACPI: Core revision 20230628
Jan 13 20:34:39.953133 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 20:34:39.953143 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:34:39.953152 kernel: x2apic enabled
Jan 13 20:34:39.953161 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:34:39.953171 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 20:34:39.953180 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 20:34:39.953190 kernel: kvm-guest: setup PV IPIs
Jan 13 20:34:39.953211 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 20:34:39.953220 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 20:34:39.953243 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 13 20:34:39.953253 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 20:34:39.953263 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 20:34:39.953277 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 20:34:39.953287 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:34:39.953297 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:34:39.953307 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:34:39.953316 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:34:39.953329 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 20:34:39.953341 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 20:34:39.953351 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 20:34:39.953361 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 20:34:39.953371 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 20:34:39.953381 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 20:34:39.953391 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 20:34:39.953401 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:34:39.953413 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:34:39.953423 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:34:39.953433 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:34:39.953443 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 20:34:39.953452 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:34:39.953462 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:34:39.953472 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:34:39.953481 kernel: landlock: Up and running.
Jan 13 20:34:39.953491 kernel: SELinux: Initializing.
Jan 13 20:34:39.953511 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:34:39.953521 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:34:39.953530 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 20:34:39.953540 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:34:39.953550 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:34:39.953560 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:34:39.953570 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 20:34:39.953599 kernel: ... version:                0
Jan 13 20:34:39.953609 kernel: ... bit width:              48
Jan 13 20:34:39.953622 kernel: ... generic registers:      6
Jan 13 20:34:39.953632 kernel: ... value mask:             0000ffffffffffff
Jan 13 20:34:39.953642 kernel: ... max period:             00007fffffffffff
Jan 13 20:34:39.953651 kernel: ... fixed-purpose events:   0
Jan 13 20:34:39.953661 kernel: ... event mask:             000000000000003f
Jan 13 20:34:39.953671 kernel: signal: max sigframe size: 1776
Jan 13 20:34:39.953680 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:34:39.953690 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:34:39.953700 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:34:39.953712 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:34:39.953722 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 20:34:39.953732 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:34:39.953741 kernel: smpboot: Max logical packages: 1
Jan 13 20:34:39.953751 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 13 20:34:39.953761 kernel: devtmpfs: initialized
Jan 13 20:34:39.953771 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:34:39.953781 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:34:39.953791 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:34:39.953803 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:34:39.953813 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:34:39.953823 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:34:39.953833 kernel: audit: type=2000 audit(1736800480.024:1): state=initialized audit_enabled=0 res=1
Jan 13 20:34:39.953842 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:34:39.953852 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:34:39.953862 kernel: cpuidle: using governor menu
Jan 13 20:34:39.953871 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:34:39.953881 kernel: dca service started, version 1.12.1
Jan 13 20:34:39.953894 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 20:34:39.953903 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 20:34:39.953913 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:34:39.953923 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:34:39.953933 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:34:39.953943 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:34:39.953952 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:34:39.953962 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:34:39.953972 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:34:39.953985 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:34:39.953994 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:34:39.954004 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:34:39.954014 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:34:39.954023 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:34:39.954033 kernel: ACPI: Interpreter enabled
Jan 13 20:34:39.954042 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 20:34:39.954052 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:34:39.954062 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:34:39.954074 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:34:39.954084 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 20:34:39.954094 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:34:39.954325 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:34:39.954577 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 20:34:39.954741 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 20:34:39.954754 kernel: PCI host bridge to bus 0000:00
Jan 13 20:34:39.954905 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:34:39.955034 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:34:39.955162 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:34:39.955291 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 20:34:39.955418 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 20:34:39.955557 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 13 20:34:39.955703 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:34:39.955922 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 20:34:39.956082 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 20:34:39.956227 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 13 20:34:39.956371 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 13 20:34:39.956523 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 13 20:34:39.956702 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:34:39.956858 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:34:39.957073 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 20:34:39.957249 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 13 20:34:39.957633 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 13 20:34:39.957827 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 20:34:39.957974 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 20:34:39.958114 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 13 20:34:39.958262 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 13 20:34:39.958425 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:34:39.958597 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 13 20:34:39.958817 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 13 20:34:39.959087 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 13 20:34:39.959238 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 13 20:34:39.959391 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 20:34:39.959553 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 20:34:39.959792 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 20:34:39.959976 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 13 20:34:39.960119 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 13 20:34:39.960267 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 20:34:39.960406 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 20:34:39.960419 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:34:39.960435 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:34:39.960445 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:34:39.960455 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:34:39.960464 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 20:34:39.960474 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 20:34:39.960484 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 20:34:39.960494 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 20:34:39.960513 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 20:34:39.960523 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 20:34:39.960537 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 20:34:39.960546 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 20:34:39.960556 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 20:34:39.960566 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 20:34:39.960575 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 20:34:39.960599 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 20:34:39.960622 kernel: iommu: Default domain type: Translated
Jan 13 20:34:39.960632 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:34:39.960642 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:34:39.960656 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:34:39.960666 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:34:39.960675 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 13 20:34:39.960820 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 20:34:39.960958 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 20:34:39.961094 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:34:39.961107 kernel: vgaarb: loaded
Jan 13 20:34:39.961117 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 20:34:39.961131 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 20:34:39.961141 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:34:39.961151 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:34:39.961161 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:34:39.961171 kernel: pnp: PnP ACPI init
Jan 13 20:34:39.961329 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 20:34:39.961343 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 20:34:39.961353 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:34:39.961368 kernel: NET: Registered PF_INET protocol family
Jan 13 20:34:39.961377 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:34:39.961387 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:34:39.961397 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:34:39.961407 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:34:39.961417 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:34:39.961427 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:34:39.961437 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:34:39.961447 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:34:39.961460 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:34:39.961470 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:34:39.961679 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:34:39.961811 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:34:39.961937 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:34:39.962062 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 20:34:39.962188 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 20:34:39.962313 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 13 20:34:39.962331 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:34:39.962341 kernel: Initialise system trusted keyrings
Jan 13 20:34:39.962351 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:34:39.962361 kernel: Key type asymmetric registered
Jan 13 20:34:39.962370 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:34:39.962380 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:34:39.962389 kernel: io scheduler mq-deadline registered
Jan 13 20:34:39.962399 kernel: io scheduler kyber registered
Jan 13 20:34:39.962409 kernel: io scheduler bfq registered
Jan 13 20:34:39.962419 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:34:39.962432 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 20:34:39.962442 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 20:34:39.962452 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 20:34:39.962462 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:34:39.962472 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:34:39.962482 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:34:39.962492 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:34:39.962511 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:34:39.962686 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 20:34:39.962825 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 20:34:39.962838 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:34:39.962965 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:34:39 UTC (1736800479)
Jan 13 20:34:39.963094 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 20:34:39.963107 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 20:34:39.963116 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:34:39.963126 kernel: Segment Routing with IPv6
Jan 13 20:34:39.963140 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:34:39.963150 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:34:39.963159 kernel: Key type dns_resolver registered
Jan 13 20:34:39.963169 kernel: IPI shorthand broadcast: enabled
Jan 13 20:34:39.963179 kernel: sched_clock: Marking stable (862003280, 111521608)->(998301806, -24776918)
Jan 13 20:34:39.963188 kernel: registered taskstats version 1
Jan 13 20:34:39.963198 kernel: Loading compiled-in X.509 certificates
Jan 13 20:34:39.963208 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 13 20:34:39.963218 kernel: Key type .fscrypt registered
Jan 13 20:34:39.963231 kernel: Key type fscrypt-provisioning registered
Jan 13 20:34:39.963241 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:34:39.963251 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:34:39.963261 kernel: ima: No architecture policies found
Jan 13 20:34:39.963270 kernel: clk: Disabling unused clocks
Jan 13 20:34:39.963280 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 13 20:34:39.963289 kernel: Write protecting the kernel read-only data: 38912k
Jan 13 20:34:39.963299 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 13 20:34:39.963309 kernel: Run /init as init process
Jan 13 20:34:39.963321 kernel: with arguments:
Jan 13 20:34:39.963331 kernel: /init
Jan 13 20:34:39.963340 kernel: with environment:
Jan 13 20:34:39.963350 kernel: HOME=/
Jan 13 20:34:39.963359 kernel: TERM=linux
Jan 13 20:34:39.963369 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:34:39.963381 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:34:39.963393 systemd[1]: Detected virtualization kvm.
Jan 13 20:34:39.963407 systemd[1]: Detected architecture x86-64.
Jan 13 20:34:39.963417 systemd[1]: Running in initrd.
Jan 13 20:34:39.963427 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:34:39.963437 systemd[1]: Hostname set to .
Jan 13 20:34:39.963448 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:34:39.963458 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:34:39.963469 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:34:39.963479 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:34:39.963493 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:34:39.963529 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:34:39.963543 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:34:39.963554 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:34:39.963566 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:34:39.963580 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:34:39.963637 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:34:39.963648 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:34:39.963659 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:34:39.963669 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:34:39.963680 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:34:39.963691 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:34:39.963701 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:34:39.963716 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:34:39.963727 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:34:39.963737 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:34:39.963748 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:34:39.963759 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:34:39.963770 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:34:39.963780 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:34:39.963791 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:34:39.963802 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:34:39.963815 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:34:39.963826 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:34:39.963837 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:34:39.963847 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:34:39.963858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:34:39.963869 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:34:39.963879 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:34:39.963912 systemd-journald[192]: Collecting audit messages is disabled. Jan 13 20:34:39.963939 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:34:39.963954 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:34:39.963965 systemd-journald[192]: Journal started Jan 13 20:34:39.963991 systemd-journald[192]: Runtime Journal (/run/log/journal/52e131b12f7f44229bbc3e8fb2f6882f) is 6.0M, max 48.3M, 42.3M free. Jan 13 20:34:39.957724 systemd-modules-load[195]: Inserted module 'overlay' Jan 13 20:34:39.998744 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:34:39.998775 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:34:39.998790 kernel: Bridge firewalling registered Jan 13 20:34:39.988732 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 13 20:34:39.999080 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 13 20:34:40.000390 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:34:40.014040 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:34:40.015246 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:34:40.016855 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:34:40.028029 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:34:40.029082 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:34:40.045970 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:34:40.049046 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:34:40.052024 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:34:40.055217 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:34:40.068966 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:34:40.072250 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:34:40.086839 dracut-cmdline[229]: dracut-dracut-053 Jan 13 20:34:40.090480 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 13 20:34:40.114004 systemd-resolved[232]: Positive Trust Anchors: Jan 13 20:34:40.114355 systemd-resolved[232]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:34:40.114404 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:34:40.117373 systemd-resolved[232]: Defaulting to hostname 'linux'. Jan 13 20:34:40.118801 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:34:40.125981 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:34:40.212642 kernel: SCSI subsystem initialized Jan 13 20:34:40.222612 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:34:40.234615 kernel: iscsi: registered transport (tcp) Jan 13 20:34:40.256659 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:34:40.256749 kernel: QLogic iSCSI HBA Driver Jan 13 20:34:40.320873 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:34:40.332792 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:34:40.360121 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 20:34:40.360207 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:34:40.360221 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:34:40.404636 kernel: raid6: avx2x4 gen() 29264 MB/s Jan 13 20:34:40.421620 kernel: raid6: avx2x2 gen() 30205 MB/s Jan 13 20:34:40.438765 kernel: raid6: avx2x1 gen() 24909 MB/s Jan 13 20:34:40.438840 kernel: raid6: using algorithm avx2x2 gen() 30205 MB/s Jan 13 20:34:40.456783 kernel: raid6: .... xor() 19697 MB/s, rmw enabled Jan 13 20:34:40.456851 kernel: raid6: using avx2x2 recovery algorithm Jan 13 20:34:40.481638 kernel: xor: automatically using best checksumming function avx Jan 13 20:34:40.633650 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:34:40.649692 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:34:40.661761 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:34:40.675684 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jan 13 20:34:40.680738 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:34:40.689754 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:34:40.704271 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Jan 13 20:34:40.740405 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:34:40.751866 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:34:40.832308 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:34:40.844810 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:34:40.858835 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:34:40.861714 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 13 20:34:40.863365 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:34:40.866558 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:34:40.877803 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:34:40.884813 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 20:34:40.908006 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 20:34:40.908186 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 20:34:40.908212 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:34:40.908228 kernel: GPT:9289727 != 19775487 Jan 13 20:34:40.908243 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:34:40.908257 kernel: GPT:9289727 != 19775487 Jan 13 20:34:40.908271 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:34:40.908285 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:34:40.906005 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:34:40.906092 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:34:40.914150 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:34:40.918019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:34:40.919531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:34:40.923373 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:34:40.929667 kernel: libata version 3.00 loaded. Jan 13 20:34:40.940163 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 13 20:34:40.940249 kernel: AES CTR mode by8 optimization enabled Jan 13 20:34:40.940268 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 20:34:40.967393 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 20:34:40.967414 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 20:34:40.967611 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 20:34:40.967765 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (476) Jan 13 20:34:40.967788 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (466) Jan 13 20:34:40.967800 kernel: scsi host0: ahci Jan 13 20:34:40.967964 kernel: scsi host1: ahci Jan 13 20:34:40.968117 kernel: scsi host2: ahci Jan 13 20:34:40.968275 kernel: scsi host3: ahci Jan 13 20:34:40.968442 kernel: scsi host4: ahci Jan 13 20:34:40.968617 kernel: scsi host5: ahci Jan 13 20:34:40.968778 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 13 20:34:40.968790 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 13 20:34:40.968800 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 13 20:34:40.968811 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 13 20:34:40.968822 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 13 20:34:40.968836 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 13 20:34:40.941871 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:34:40.944469 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:34:40.987277 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 20:34:41.008838 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 13 20:34:41.015111 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 20:34:41.021238 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 20:34:41.022638 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 20:34:41.030976 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:34:41.043924 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:34:41.047735 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:34:41.056059 disk-uuid[554]: Primary Header is updated. Jan 13 20:34:41.056059 disk-uuid[554]: Secondary Entries is updated. Jan 13 20:34:41.056059 disk-uuid[554]: Secondary Header is updated. Jan 13 20:34:41.060618 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:34:41.072320 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:34:41.108075 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 20:34:41.279642 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 20:34:41.279769 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 20:34:41.281627 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 20:34:41.281661 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 20:34:41.283649 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 20:34:41.283677 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 20:34:41.284637 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 20:34:41.285752 kernel: ata3.00: applying bridge limits Jan 13 20:34:41.286622 kernel: ata3.00: configured for UDMA/100 Jan 13 20:34:41.288629 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 20:34:41.343656 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 20:34:41.366159 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 20:34:41.366188 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 20:34:42.080610 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:34:42.081091 disk-uuid[556]: The operation has completed successfully. Jan 13 20:34:42.116657 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:34:42.116793 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:34:42.142752 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:34:42.146575 sh[593]: Success Jan 13 20:34:42.161673 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 20:34:42.200284 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:34:42.218554 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:34:42.224111 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 20:34:42.234747 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a Jan 13 20:34:42.234777 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:34:42.234788 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:34:42.235777 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:34:42.237113 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:34:42.241538 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:34:42.244000 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:34:42.251701 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:34:42.253341 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:34:42.263460 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:34:42.263495 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:34:42.263506 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:34:42.266659 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:34:42.275926 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:34:42.277794 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:34:42.367900 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:34:42.379731 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 13 20:34:42.402694 systemd-networkd[771]: lo: Link UP Jan 13 20:34:42.402704 systemd-networkd[771]: lo: Gained carrier Jan 13 20:34:42.404429 systemd-networkd[771]: Enumeration completed Jan 13 20:34:42.404527 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:34:42.404866 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:34:42.404870 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:34:42.406983 systemd-networkd[771]: eth0: Link UP Jan 13 20:34:42.406989 systemd-networkd[771]: eth0: Gained carrier Jan 13 20:34:42.406998 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:34:42.408645 systemd[1]: Reached target network.target - Network. Jan 13 20:34:42.436667 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:34:42.486755 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:34:42.492904 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 13 20:34:42.551837 ignition[776]: Ignition 2.20.0 Jan 13 20:34:42.551850 ignition[776]: Stage: fetch-offline Jan 13 20:34:42.551894 ignition[776]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:34:42.551904 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:34:42.552023 ignition[776]: parsed url from cmdline: "" Jan 13 20:34:42.552028 ignition[776]: no config URL provided Jan 13 20:34:42.552033 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:34:42.552044 ignition[776]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:34:42.552079 ignition[776]: op(1): [started] loading QEMU firmware config module Jan 13 20:34:42.552084 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 20:34:42.561209 ignition[776]: op(1): [finished] loading QEMU firmware config module Jan 13 20:34:42.604686 ignition[776]: parsing config with SHA512: 0aad33f0958fe846bb6f8a3d5cb568a1cc47cd9a925579194dbe7235c1feb9a7074e4c7f38ec6630963452c6a5cff8678a41029da59f522dbf8512303373d7ea Jan 13 20:34:42.609824 unknown[776]: fetched base config from "system" Jan 13 20:34:42.610635 unknown[776]: fetched user config from "qemu" Jan 13 20:34:42.611231 ignition[776]: fetch-offline: fetch-offline passed Jan 13 20:34:42.611078 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.79 Jan 13 20:34:42.611379 ignition[776]: Ignition finished successfully Jan 13 20:34:42.611092 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Jan 13 20:34:42.614139 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:34:42.616574 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 20:34:42.624846 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 13 20:34:42.642398 ignition[788]: Ignition 2.20.0 Jan 13 20:34:42.642412 ignition[788]: Stage: kargs Jan 13 20:34:42.642656 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:34:42.642672 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:34:42.643837 ignition[788]: kargs: kargs passed Jan 13 20:34:42.643896 ignition[788]: Ignition finished successfully Jan 13 20:34:42.647328 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:34:42.661731 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:34:42.674460 ignition[796]: Ignition 2.20.0 Jan 13 20:34:42.674478 ignition[796]: Stage: disks Jan 13 20:34:42.674707 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:34:42.674723 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:34:42.679651 ignition[796]: disks: disks passed Jan 13 20:34:42.680458 ignition[796]: Ignition finished successfully Jan 13 20:34:42.683673 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:34:42.685463 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:34:42.687935 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:34:42.689649 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:34:42.692238 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:34:42.693571 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:34:42.709743 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:34:42.723854 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:34:42.730176 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:34:42.740734 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 13 20:34:42.825612 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none. Jan 13 20:34:42.826285 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:34:42.828903 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:34:42.844824 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:34:42.848051 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:34:42.850479 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:34:42.850537 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:34:42.859480 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815) Jan 13 20:34:42.859509 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:34:42.859524 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:34:42.859539 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:34:42.850566 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:34:42.861839 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:34:42.863160 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:34:42.865156 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:34:42.869037 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 13 20:34:42.909372 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:34:42.915345 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:34:42.920571 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:34:42.925928 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:34:43.020119 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:34:43.035693 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:34:43.038489 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:34:43.046623 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:34:43.066732 ignition[928]: INFO : Ignition 2.20.0 Jan 13 20:34:43.066732 ignition[928]: INFO : Stage: mount Jan 13 20:34:43.068450 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:34:43.068450 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:34:43.068450 ignition[928]: INFO : mount: mount passed Jan 13 20:34:43.068450 ignition[928]: INFO : Ignition finished successfully Jan 13 20:34:43.072385 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:34:43.074605 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:34:43.086674 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:34:43.234623 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:34:43.246752 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 13 20:34:43.258748 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942) Jan 13 20:34:43.258798 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:34:43.259697 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:34:43.259711 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:34:43.263609 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:34:43.265069 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:34:43.288710 ignition[959]: INFO : Ignition 2.20.0 Jan 13 20:34:43.288710 ignition[959]: INFO : Stage: files Jan 13 20:34:43.290668 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:34:43.290668 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:34:43.290668 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:34:43.295018 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:34:43.295018 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:34:43.295018 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:34:43.295018 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:34:43.295018 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:34:43.294753 unknown[959]: wrote ssh authorized keys file for user: core Jan 13 20:34:43.305107 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:34:43.305107 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 20:34:43.327372 
ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 20:34:43.567581 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:34:43.567581 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:34:43.572429 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:34:43.572429 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:34:43.572429 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:34:43.572429 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:34:43.572429 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:34:43.572429 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:34:43.572429 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:34:43.572429 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:34:43.572429 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:34:43.572429 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:34:43.572429 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:34:43.572429 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:34:43.572429 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 13 20:34:43.941939 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 20:34:44.055829 systemd-networkd[771]: eth0: Gained IPv6LL Jan 13 20:34:44.286412 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:34:44.286412 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 20:34:44.290067 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:34:44.290067 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:34:44.290067 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 20:34:44.290067 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 13 20:34:44.290067 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:34:44.290067 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:34:44.290067 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 13 20:34:44.290067 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 20:34:44.322810 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:34:44.329507 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:34:44.331144 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 20:34:44.331144 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:34:44.331144 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:34:44.331144 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:34:44.331144 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:34:44.331144 ignition[959]: INFO : files: files passed Jan 13 20:34:44.331144 ignition[959]: INFO : Ignition finished successfully Jan 13 20:34:44.342857 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:34:44.357783 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:34:44.360259 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:34:44.362935 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:34:44.363096 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 13 20:34:44.376383 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 20:34:44.379721 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:34:44.379721 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:34:44.383077 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:34:44.386948 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:34:44.389741 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:34:44.403930 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:34:44.440449 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:34:44.440654 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:34:44.442152 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:34:44.444513 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:34:44.448938 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:34:44.450183 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:34:44.473614 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:34:44.480716 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:34:44.497131 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:34:44.498738 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:34:44.501499 systemd[1]: Stopped target timers.target - Timer Units. 
Jan 13 20:34:44.502868 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:34:44.503044 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:34:44.505199 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:34:44.505610 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:34:44.506185 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:34:44.506574 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:34:44.507163 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:34:44.507555 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:34:44.508134 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:34:44.508957 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:34:44.509333 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:34:44.509906 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:34:44.510241 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:34:44.510399 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:34:44.533229 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:34:44.533437 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:34:44.535773 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:34:44.535953 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:34:44.539769 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:34:44.539938 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:34:44.543766 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 13 20:34:44.543927 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:34:44.546304 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:34:44.548431 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:34:44.551943 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:34:44.552437 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:34:44.553000 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:34:44.553371 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:34:44.553536 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:34:44.561195 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:34:44.561291 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:34:44.563462 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:34:44.563607 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:34:44.565955 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:34:44.566075 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:34:44.581831 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:34:44.581946 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:34:44.582095 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:34:44.587543 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:34:44.589186 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:34:44.589467 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:34:44.591973 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 13 20:34:44.592148 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:34:44.598517 ignition[1014]: INFO : Ignition 2.20.0 Jan 13 20:34:44.598517 ignition[1014]: INFO : Stage: umount Jan 13 20:34:44.602037 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:34:44.602037 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:34:44.602037 ignition[1014]: INFO : umount: umount passed Jan 13 20:34:44.602037 ignition[1014]: INFO : Ignition finished successfully Jan 13 20:34:44.599668 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:34:44.599833 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:34:44.602438 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:34:44.602551 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:34:44.607194 systemd[1]: Stopped target network.target - Network. Jan 13 20:34:44.609046 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:34:44.609172 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:34:44.611501 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:34:44.611556 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:34:44.612830 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:34:44.612881 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:34:44.615509 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:34:44.615602 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:34:44.618230 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:34:44.620698 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 13 20:34:44.623017 systemd-networkd[771]: eth0: DHCPv6 lease lost Jan 13 20:34:44.625079 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:34:44.625817 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:34:44.625973 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:34:44.627463 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:34:44.627695 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:34:44.631769 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:34:44.631834 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:34:44.639806 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:34:44.641245 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:34:44.641318 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:34:44.643905 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:34:44.643969 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:34:44.646188 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:34:44.646252 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:34:44.648644 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:34:44.648706 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:34:44.651428 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:34:44.664177 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:34:44.664384 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:34:44.667800 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 13 20:34:44.667920 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:34:44.673327 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:34:44.673395 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:34:44.674709 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:34:44.674769 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:34:44.677399 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:34:44.677454 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:34:44.679960 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:34:44.680012 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:34:44.683481 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:34:44.685203 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:34:44.685262 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:34:44.687860 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:34:44.687914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:34:44.690782 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:34:44.690908 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:34:44.700105 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:34:44.700237 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:34:44.801246 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:34:44.801466 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:34:44.803903 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Jan 13 20:34:44.805882 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:34:44.805969 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:34:44.818854 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:34:44.828360 systemd[1]: Switching root. Jan 13 20:34:44.860632 systemd-journald[192]: Journal stopped Jan 13 20:34:46.159336 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 13 20:34:46.159444 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:34:46.159473 kernel: SELinux: policy capability open_perms=1 Jan 13 20:34:46.159490 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:34:46.159513 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:34:46.159529 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:34:46.159545 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:34:46.159560 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:34:46.159575 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:34:46.159618 kernel: audit: type=1403 audit(1736800485.202:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:34:46.159636 systemd[1]: Successfully loaded SELinux policy in 44.418ms. Jan 13 20:34:46.159667 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.146ms. Jan 13 20:34:46.159689 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:34:46.159707 systemd[1]: Detected virtualization kvm. Jan 13 20:34:46.159723 systemd[1]: Detected architecture x86-64. Jan 13 20:34:46.159740 systemd[1]: Detected first boot. 
Jan 13 20:34:46.159765 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:34:46.159781 zram_generator::config[1058]: No configuration found. Jan 13 20:34:46.159799 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:34:46.159815 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:34:46.159837 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:34:46.159860 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:34:46.159878 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:34:46.159898 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:34:46.159913 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:34:46.159929 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:34:46.159951 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:34:46.159967 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:34:46.161409 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:34:46.161436 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:34:46.161455 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:34:46.161473 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:34:46.161491 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:34:46.161511 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:34:46.161529 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 13 20:34:46.161546 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:34:46.161564 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:34:46.161604 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:34:46.161622 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:34:46.161637 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:34:46.161653 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:34:46.161669 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:34:46.161686 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:34:46.161703 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:34:46.161720 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:34:46.161742 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:34:46.161759 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:34:46.161776 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:34:46.161793 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:34:46.161810 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:34:46.161826 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:34:46.161843 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:34:46.161860 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:34:46.161877 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:34:46.161907 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 13 20:34:46.161924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:34:46.161941 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:34:46.161958 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:34:46.161974 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:34:46.161992 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:34:46.162009 systemd[1]: Reached target machines.target - Containers. Jan 13 20:34:46.162026 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:34:46.162047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:34:46.162063 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:34:46.162080 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:34:46.162096 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:34:46.162113 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:34:46.162130 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:34:46.162146 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:34:46.162163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:34:46.162180 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:34:46.162201 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 13 20:34:46.162218 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:34:46.162235 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:34:46.162251 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:34:46.162268 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:34:46.162284 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:34:46.162300 kernel: fuse: init (API version 7.39) Jan 13 20:34:46.162316 kernel: loop: module loaded Jan 13 20:34:46.162334 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:34:46.162368 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:34:46.162386 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:34:46.162404 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:34:46.162420 systemd[1]: Stopped verity-setup.service. Jan 13 20:34:46.162437 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:34:46.162461 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:34:46.162478 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:34:46.162515 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:34:46.162536 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:34:46.162554 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:34:46.162570 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:34:46.162603 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:34:46.162620 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 13 20:34:46.162640 kernel: ACPI: bus type drm_connector registered Jan 13 20:34:46.162682 systemd-journald[1121]: Collecting audit messages is disabled. Jan 13 20:34:46.162714 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:34:46.162732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:34:46.162749 systemd-journald[1121]: Journal started Jan 13 20:34:46.162786 systemd-journald[1121]: Runtime Journal (/run/log/journal/52e131b12f7f44229bbc3e8fb2f6882f) is 6.0M, max 48.3M, 42.3M free. Jan 13 20:34:45.830366 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:34:45.848870 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:34:45.849385 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:34:46.164168 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:34:46.167683 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:34:46.168577 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:34:46.168778 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:34:46.170243 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:34:46.170471 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:34:46.172071 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:34:46.172248 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:34:46.173708 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:34:46.173878 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:34:46.175340 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:34:46.177104 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 13 20:34:46.178870 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:34:46.195496 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:34:46.207823 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:34:46.211915 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:34:46.213531 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:34:46.213683 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:34:46.216915 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:34:46.221803 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:34:46.225876 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:34:46.227537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:34:46.230944 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:34:46.235836 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:34:46.237712 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:34:46.240707 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:34:46.242497 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:34:46.246800 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 13 20:34:46.252855 systemd-journald[1121]: Time spent on flushing to /var/log/journal/52e131b12f7f44229bbc3e8fb2f6882f is 21.220ms for 949 entries. Jan 13 20:34:46.252855 systemd-journald[1121]: System Journal (/var/log/journal/52e131b12f7f44229bbc3e8fb2f6882f) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:34:46.294530 systemd-journald[1121]: Received client request to flush runtime journal. Jan 13 20:34:46.294630 kernel: loop0: detected capacity change from 0 to 210664 Jan 13 20:34:46.256398 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:34:46.262646 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:34:46.265951 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:34:46.267700 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:34:46.269375 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:34:46.293991 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:34:46.298273 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:34:46.298926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:34:46.303123 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:34:46.306015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:34:46.311788 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:34:46.323982 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:34:46.328268 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jan 13 20:34:46.343621 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:34:46.370255 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:34:46.392198 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:34:46.434624 kernel: loop1: detected capacity change from 0 to 138184 Jan 13 20:34:46.443905 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:34:46.522449 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 13 20:34:46.522478 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 13 20:34:46.528749 kernel: loop2: detected capacity change from 0 to 141000 Jan 13 20:34:46.531537 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:34:46.535219 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:34:46.536328 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:34:46.583628 kernel: loop3: detected capacity change from 0 to 210664 Jan 13 20:34:46.593613 kernel: loop4: detected capacity change from 0 to 138184 Jan 13 20:34:46.608614 kernel: loop5: detected capacity change from 0 to 141000 Jan 13 20:34:46.619751 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:34:46.620898 (sd-merge)[1197]: Merged extensions into '/usr'. Jan 13 20:34:46.627479 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:34:46.627502 systemd[1]: Reloading... Jan 13 20:34:46.712865 zram_generator::config[1230]: No configuration found. Jan 13 20:34:46.826077 ldconfig[1166]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 13 20:34:46.846414 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:34:46.897043 systemd[1]: Reloading finished in 268 ms. Jan 13 20:34:46.930646 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:34:46.958043 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:34:46.971838 systemd[1]: Starting ensure-sysext.service... Jan 13 20:34:46.974835 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:34:46.981488 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:34:46.981506 systemd[1]: Reloading... Jan 13 20:34:46.996508 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:34:46.996832 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:34:46.997892 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:34:46.998191 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 13 20:34:46.998291 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 13 20:34:47.003464 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:34:47.003476 systemd-tmpfiles[1261]: Skipping /boot Jan 13 20:34:47.022115 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:34:47.022727 systemd-tmpfiles[1261]: Skipping /boot Jan 13 20:34:47.058620 zram_generator::config[1288]: No configuration found. 
Jan 13 20:34:47.167221 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:34:47.222866 systemd[1]: Reloading finished in 240 ms. Jan 13 20:34:47.244299 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:34:47.256442 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:34:47.279023 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:34:47.282574 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:34:47.285830 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:34:47.291604 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:34:47.297954 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:34:47.302351 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:34:47.307126 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:34:47.307382 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:34:47.309509 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:34:47.317346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:34:47.324666 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:34:47.326076 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:34:47.333708 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 13 20:34:47.335061 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:34:47.337837 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:34:47.339992 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:34:47.340559 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:34:47.342457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:34:47.342665 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:34:47.342996 systemd-udevd[1337]: Using default interface naming scheme 'v255'.
Jan 13 20:34:47.344773 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:34:47.344977 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:34:47.353973 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:34:47.354267 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:34:47.362966 augenrules[1361]: No rules
Jan 13 20:34:47.362276 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:34:47.367873 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:34:47.368280 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:34:47.371100 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:34:47.376565 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:34:47.380179 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:34:47.385600 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:34:47.385964 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:34:47.398999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:34:47.402913 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:34:47.408368 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:34:47.409816 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:34:47.421974 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:34:47.423129 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:34:47.426042 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:34:47.427877 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:34:47.431093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:34:47.431284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:34:47.460640 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:34:47.462600 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:34:47.462788 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:34:47.468188 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:34:47.469742 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:34:47.473466 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 20:34:47.476630 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1376)
Jan 13 20:34:47.483234 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:34:47.493833 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:34:47.495045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:34:47.500924 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:34:47.503692 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:34:47.506208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:34:47.506293 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:34:47.520842 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:34:47.522144 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:34:47.522193 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:34:47.525134 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:34:47.526956 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:34:47.527317 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:34:47.532538 augenrules[1408]: /sbin/augenrules: No change
Jan 13 20:34:47.533473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:34:47.539155 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:34:47.555721 systemd-networkd[1393]: lo: Link UP
Jan 13 20:34:47.555736 systemd-networkd[1393]: lo: Gained carrier
Jan 13 20:34:47.562191 augenrules[1433]: No rules
Jan 13 20:34:47.558012 systemd-networkd[1393]: Enumeration completed
Jan 13 20:34:47.561814 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:34:47.563283 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:34:47.563541 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:34:47.565269 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:34:47.565570 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:34:47.570110 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:34:47.570119 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:34:47.570646 systemd-resolved[1336]: Positive Trust Anchors:
Jan 13 20:34:47.570662 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:34:47.570708 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:34:47.573871 systemd-networkd[1393]: eth0: Link UP
Jan 13 20:34:47.573878 systemd-networkd[1393]: eth0: Gained carrier
Jan 13 20:34:47.573906 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:34:47.580901 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:34:47.581564 systemd-resolved[1336]: Defaulting to hostname 'linux'.
Jan 13 20:34:47.584451 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:34:47.586423 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:34:47.587763 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:34:47.588887 systemd[1]: Reached target network.target - Network.
Jan 13 20:34:47.589960 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:34:47.597685 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 20:34:47.602112 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:34:48.974282 kernel: ACPI: button: Power Button [PWRF]
Jan 13 20:34:48.974532 systemd-resolved[1336]: Clock change detected. Flushing caches.
Jan 13 20:34:48.974775 systemd-timesyncd[1419]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 20:34:48.975418 systemd-timesyncd[1419]: Initial clock synchronization to Mon 2025-01-13 20:34:48.974263 UTC.
Jan 13 20:34:48.975468 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:34:48.988280 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 13 20:34:49.007303 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 13 20:34:49.008931 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 13 20:34:49.009137 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 13 20:34:49.041268 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:34:49.043373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:34:49.103745 kernel: kvm_amd: TSC scaling supported
Jan 13 20:34:49.103861 kernel: kvm_amd: Nested Virtualization enabled
Jan 13 20:34:49.103880 kernel: kvm_amd: Nested Paging enabled
Jan 13 20:34:49.103893 kernel: kvm_amd: LBR virtualization supported
Jan 13 20:34:49.105419 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 13 20:34:49.105451 kernel: kvm_amd: Virtual GIF supported
Jan 13 20:34:49.133296 kernel: EDAC MC: Ver: 3.0.0
Jan 13 20:34:49.167774 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:34:49.178303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:34:49.189536 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:34:49.202083 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:34:49.234877 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:34:49.237335 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:34:49.238733 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:34:49.240147 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:34:49.241684 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:34:49.243726 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:34:49.245494 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:34:49.247075 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:34:49.248598 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:34:49.248648 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:34:49.249769 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:34:49.252326 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:34:49.255480 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:34:49.266928 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:34:49.270021 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:34:49.271994 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:34:49.273190 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:34:49.274218 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:34:49.275274 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:34:49.275309 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:34:49.276582 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:34:49.279158 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:34:49.284303 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:34:49.286348 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:34:49.287409 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:34:49.288827 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:34:49.293042 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:34:49.297996 jq[1461]: false
Jan 13 20:34:49.298412 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:34:49.301995 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:34:49.303706 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:34:49.310465 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:34:49.312570 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:34:49.313233 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:34:49.314458 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:34:49.329569 extend-filesystems[1462]: Found loop3
Jan 13 20:34:49.329569 extend-filesystems[1462]: Found loop4
Jan 13 20:34:49.329569 extend-filesystems[1462]: Found loop5
Jan 13 20:34:49.329569 extend-filesystems[1462]: Found sr0
Jan 13 20:34:49.329569 extend-filesystems[1462]: Found vda
Jan 13 20:34:49.329569 extend-filesystems[1462]: Found vda1
Jan 13 20:34:49.329569 extend-filesystems[1462]: Found vda2
Jan 13 20:34:49.329569 extend-filesystems[1462]: Found vda3
Jan 13 20:34:49.329569 extend-filesystems[1462]: Found usr
Jan 13 20:34:49.329569 extend-filesystems[1462]: Found vda4
Jan 13 20:34:49.329569 extend-filesystems[1462]: Found vda6
Jan 13 20:34:49.329569 extend-filesystems[1462]: Found vda7
Jan 13 20:34:49.329569 extend-filesystems[1462]: Found vda9
Jan 13 20:34:49.329569 extend-filesystems[1462]: Checking size of /dev/vda9
Jan 13 20:34:49.317789 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:34:49.344734 dbus-daemon[1460]: [system] SELinux support is enabled
Jan 13 20:34:49.323080 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:34:49.323419 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:34:49.369953 jq[1470]: true
Jan 13 20:34:49.324852 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:34:49.326314 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:34:49.370303 jq[1480]: true
Jan 13 20:34:49.331639 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:34:49.344924 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:34:49.352551 (ntainerd)[1484]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:34:49.354125 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:34:49.354469 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:34:49.372292 tar[1474]: linux-amd64/helm
Jan 13 20:34:49.381708 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:34:49.381754 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:34:49.383089 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:34:49.383177 update_engine[1469]: I20250113 20:34:49.383095 1469 main.cc:92] Flatcar Update Engine starting
Jan 13 20:34:49.383191 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:34:49.390044 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:34:49.392233 extend-filesystems[1462]: Resized partition /dev/vda9
Jan 13 20:34:49.394299 update_engine[1469]: I20250113 20:34:49.390058 1469 update_check_scheduler.cc:74] Next update check in 3m13s
Jan 13 20:34:49.397035 extend-filesystems[1498]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:34:49.405542 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:34:49.406264 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 20:34:49.414297 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1396)
Jan 13 20:34:49.453276 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 20:34:49.467739 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:34:49.477451 systemd-logind[1468]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 20:34:49.477487 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 20:34:49.478416 extend-filesystems[1498]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:34:49.478416 extend-filesystems[1498]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:34:49.478416 extend-filesystems[1498]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 20:34:49.491383 extend-filesystems[1462]: Resized filesystem in /dev/vda9
Jan 13 20:34:49.479805 systemd-logind[1468]: New seat seat0.
Jan 13 20:34:49.480850 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:34:49.485768 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:34:49.486021 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:34:49.509365 bash[1514]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:34:49.510456 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:34:49.513845 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 20:34:49.534707 sshd_keygen[1486]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:34:49.563403 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:34:49.572554 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:34:49.582992 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:34:49.583281 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:34:49.593512 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:34:49.606161 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:34:49.607231 containerd[1484]: time="2025-01-13T20:34:49.607082678Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:34:49.621585 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:34:49.624124 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 20:34:49.625931 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:34:49.640132 containerd[1484]: time="2025-01-13T20:34:49.639947129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:34:49.641933 containerd[1484]: time="2025-01-13T20:34:49.641900251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:34:49.641933 containerd[1484]: time="2025-01-13T20:34:49.641930468Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:34:49.642034 containerd[1484]: time="2025-01-13T20:34:49.641946478Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:34:49.642205 containerd[1484]: time="2025-01-13T20:34:49.642168965Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:34:49.642205 containerd[1484]: time="2025-01-13T20:34:49.642193702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:34:49.642299 containerd[1484]: time="2025-01-13T20:34:49.642280725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:34:49.642324 containerd[1484]: time="2025-01-13T20:34:49.642298027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:34:49.642537 containerd[1484]: time="2025-01-13T20:34:49.642516066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:34:49.642537 containerd[1484]: time="2025-01-13T20:34:49.642534821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:34:49.642590 containerd[1484]: time="2025-01-13T20:34:49.642547605Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:34:49.642590 containerd[1484]: time="2025-01-13T20:34:49.642558926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:34:49.642676 containerd[1484]: time="2025-01-13T20:34:49.642657501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:34:49.642941 containerd[1484]: time="2025-01-13T20:34:49.642914663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:34:49.643065 containerd[1484]: time="2025-01-13T20:34:49.643044837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:34:49.643065 containerd[1484]: time="2025-01-13T20:34:49.643062581Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:34:49.643189 containerd[1484]: time="2025-01-13T20:34:49.643171144Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:34:49.643290 containerd[1484]: time="2025-01-13T20:34:49.643235184Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:34:49.650377 containerd[1484]: time="2025-01-13T20:34:49.650332121Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:34:49.650426 containerd[1484]: time="2025-01-13T20:34:49.650398686Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:34:49.650426 containerd[1484]: time="2025-01-13T20:34:49.650416199Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:34:49.650465 containerd[1484]: time="2025-01-13T20:34:49.650431688Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:34:49.650465 containerd[1484]: time="2025-01-13T20:34:49.650447157Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:34:49.650680 containerd[1484]: time="2025-01-13T20:34:49.650646591Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:34:49.652720 containerd[1484]: time="2025-01-13T20:34:49.652674253Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:34:49.652863 containerd[1484]: time="2025-01-13T20:34:49.652834623Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:34:49.652863 containerd[1484]: time="2025-01-13T20:34:49.652855763Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:34:49.652910 containerd[1484]: time="2025-01-13T20:34:49.652869709Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:34:49.652910 containerd[1484]: time="2025-01-13T20:34:49.652885629Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:34:49.652910 containerd[1484]: time="2025-01-13T20:34:49.652898753Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:34:49.652971 containerd[1484]: time="2025-01-13T20:34:49.652911377Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:34:49.652971 containerd[1484]: time="2025-01-13T20:34:49.652925684Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:34:49.652971 containerd[1484]: time="2025-01-13T20:34:49.652939590Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:34:49.652971 containerd[1484]: time="2025-01-13T20:34:49.652957163Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:34:49.653049 containerd[1484]: time="2025-01-13T20:34:49.652979705Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:34:49.653049 containerd[1484]: time="2025-01-13T20:34:49.652991427Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:34:49.653049 containerd[1484]: time="2025-01-13T20:34:49.653013569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653049 containerd[1484]: time="2025-01-13T20:34:49.653027455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653049 containerd[1484]: time="2025-01-13T20:34:49.653040780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653149 containerd[1484]: time="2025-01-13T20:34:49.653053604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653149 containerd[1484]: time="2025-01-13T20:34:49.653065747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653149 containerd[1484]: time="2025-01-13T20:34:49.653078641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653149 containerd[1484]: time="2025-01-13T20:34:49.653089671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653149 containerd[1484]: time="2025-01-13T20:34:49.653101444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653149 containerd[1484]: time="2025-01-13T20:34:49.653117884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653149 containerd[1484]: time="2025-01-13T20:34:49.653133564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653149 containerd[1484]: time="2025-01-13T20:34:49.653145226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653308 containerd[1484]: time="2025-01-13T20:34:49.653156737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653308 containerd[1484]: time="2025-01-13T20:34:49.653168780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653308 containerd[1484]: time="2025-01-13T20:34:49.653184109Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:34:49.653308 containerd[1484]: time="2025-01-13T20:34:49.653203254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653308 containerd[1484]: time="2025-01-13T20:34:49.653215898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.653308 containerd[1484]: time="2025-01-13T20:34:49.653227680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:34:49.654045 containerd[1484]: time="2025-01-13T20:34:49.654011400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:34:49.654045 containerd[1484]: time="2025-01-13T20:34:49.654040524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:34:49.654153 containerd[1484]: time="2025-01-13T20:34:49.654126125Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:34:49.654153 containerd[1484]: time="2025-01-13T20:34:49.654145651Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:34:49.654199 containerd[1484]: time="2025-01-13T20:34:49.654156161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.654199 containerd[1484]: time="2025-01-13T20:34:49.654170508Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:34:49.654199 containerd[1484]: time="2025-01-13T20:34:49.654197759Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:34:49.654264 containerd[1484]: time="2025-01-13T20:34:49.654209832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:34:49.654625 containerd[1484]: time="2025-01-13T20:34:49.654550971Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:34:49.654625 containerd[1484]: time="2025-01-13T20:34:49.654616003Z" level=info msg="Connect containerd service" Jan 13 20:34:49.654812 containerd[1484]: time="2025-01-13T20:34:49.654664454Z" level=info msg="using legacy CRI server" Jan 13 20:34:49.654812 containerd[1484]: time="2025-01-13T20:34:49.654674403Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:34:49.654870 containerd[1484]: time="2025-01-13T20:34:49.654849982Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:34:49.655754 containerd[1484]: time="2025-01-13T20:34:49.655693604Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:34:49.656313 containerd[1484]: time="2025-01-13T20:34:49.655939926Z" level=info msg="Start subscribing containerd event" Jan 13 20:34:49.656313 containerd[1484]: time="2025-01-13T20:34:49.656026077Z" level=info msg="Start recovering state" Jan 13 20:34:49.656313 containerd[1484]: time="2025-01-13T20:34:49.656115295Z" level=info msg="Start event monitor" Jan 13 20:34:49.656313 containerd[1484]: time="2025-01-13T20:34:49.656132076Z" level=info msg="Start 
snapshots syncer" Jan 13 20:34:49.656313 containerd[1484]: time="2025-01-13T20:34:49.656142345Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:34:49.656313 containerd[1484]: time="2025-01-13T20:34:49.656150330Z" level=info msg="Start streaming server" Jan 13 20:34:49.656430 containerd[1484]: time="2025-01-13T20:34:49.656329566Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:34:49.656647 containerd[1484]: time="2025-01-13T20:34:49.656424184Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:34:49.657227 containerd[1484]: time="2025-01-13T20:34:49.657204246Z" level=info msg="containerd successfully booted in 0.052871s" Jan 13 20:34:49.657300 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:34:49.847509 tar[1474]: linux-amd64/LICENSE Jan 13 20:34:49.847509 tar[1474]: linux-amd64/README.md Jan 13 20:34:49.865386 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:34:50.930510 systemd-networkd[1393]: eth0: Gained IPv6LL Jan 13 20:34:50.935067 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:34:50.937113 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:34:50.951567 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:34:50.954862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:34:50.957410 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:34:50.981442 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:34:50.981707 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:34:50.983818 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:34:50.986258 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 13 20:34:51.596666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:34:51.598746 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:34:51.600869 systemd[1]: Startup finished in 1.013s (kernel) + 5.464s (initrd) + 5.069s (userspace) = 11.547s. Jan 13 20:34:51.610608 agetty[1545]: failed to open credentials directory Jan 13 20:34:51.627777 (kubelet)[1573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:34:51.638816 agetty[1543]: failed to open credentials directory Jan 13 20:34:52.123833 kubelet[1573]: E0113 20:34:52.123723 1573 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:34:52.128366 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:34:52.128608 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:34:52.129001 systemd[1]: kubelet.service: Consumed 1.020s CPU time. Jan 13 20:34:59.012761 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:34:59.014434 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:33606.service - OpenSSH per-connection server daemon (10.0.0.1:33606). Jan 13 20:34:59.075684 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 33606 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:34:59.077993 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:59.088322 systemd-logind[1468]: New session 1 of user core. Jan 13 20:34:59.089781 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 13 20:34:59.098592 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:34:59.112074 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:34:59.125733 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:34:59.128962 (systemd)[1591]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:34:59.235462 systemd[1591]: Queued start job for default target default.target. Jan 13 20:34:59.246884 systemd[1591]: Created slice app.slice - User Application Slice. Jan 13 20:34:59.246921 systemd[1591]: Reached target paths.target - Paths. Jan 13 20:34:59.246942 systemd[1591]: Reached target timers.target - Timers. Jan 13 20:34:59.249148 systemd[1591]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:34:59.264600 systemd[1591]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:34:59.264814 systemd[1591]: Reached target sockets.target - Sockets. Jan 13 20:34:59.264843 systemd[1591]: Reached target basic.target - Basic System. Jan 13 20:34:59.264907 systemd[1591]: Reached target default.target - Main User Target. Jan 13 20:34:59.264966 systemd[1591]: Startup finished in 128ms. Jan 13 20:34:59.265559 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:34:59.275549 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:34:59.338874 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:33610.service - OpenSSH per-connection server daemon (10.0.0.1:33610). Jan 13 20:34:59.386441 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 33610 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:34:59.388338 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:59.393085 systemd-logind[1468]: New session 2 of user core. 
Jan 13 20:34:59.404405 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:34:59.459986 sshd[1604]: Connection closed by 10.0.0.1 port 33610 Jan 13 20:34:59.460457 sshd-session[1602]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:59.468915 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:33610.service: Deactivated successfully. Jan 13 20:34:59.471535 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:34:59.473864 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:34:59.485683 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:33618.service - OpenSSH per-connection server daemon (10.0.0.1:33618). Jan 13 20:34:59.486869 systemd-logind[1468]: Removed session 2. Jan 13 20:34:59.520521 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 33618 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:34:59.522383 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:59.527667 systemd-logind[1468]: New session 3 of user core. Jan 13 20:34:59.538392 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:34:59.589215 sshd[1611]: Connection closed by 10.0.0.1 port 33618 Jan 13 20:34:59.589698 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:59.597268 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:33618.service: Deactivated successfully. Jan 13 20:34:59.599194 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:34:59.601049 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:34:59.611689 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:33620.service - OpenSSH per-connection server daemon (10.0.0.1:33620). Jan 13 20:34:59.612896 systemd-logind[1468]: Removed session 3. 
Jan 13 20:34:59.646951 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 33620 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:34:59.649093 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:59.653527 systemd-logind[1468]: New session 4 of user core. Jan 13 20:34:59.668543 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:34:59.724867 sshd[1618]: Connection closed by 10.0.0.1 port 33620 Jan 13 20:34:59.725285 sshd-session[1616]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:59.744313 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:33620.service: Deactivated successfully. Jan 13 20:34:59.746229 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:34:59.747913 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:34:59.757572 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:33628.service - OpenSSH per-connection server daemon (10.0.0.1:33628). Jan 13 20:34:59.758560 systemd-logind[1468]: Removed session 4. Jan 13 20:34:59.792231 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 33628 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:34:59.794144 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:59.798612 systemd-logind[1468]: New session 5 of user core. Jan 13 20:34:59.808360 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 13 20:34:59.868766 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:34:59.869138 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:34:59.886498 sudo[1626]: pam_unix(sudo:session): session closed for user root Jan 13 20:34:59.888540 sshd[1625]: Connection closed by 10.0.0.1 port 33628 Jan 13 20:34:59.889036 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:59.897638 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:33628.service: Deactivated successfully. Jan 13 20:34:59.899779 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:34:59.901546 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:34:59.903238 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:33644.service - OpenSSH per-connection server daemon (10.0.0.1:33644). Jan 13 20:34:59.904126 systemd-logind[1468]: Removed session 5. Jan 13 20:34:59.940351 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 33644 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:34:59.941764 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:59.946176 systemd-logind[1468]: New session 6 of user core. Jan 13 20:34:59.961373 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 13 20:35:00.017222 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:35:00.017746 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:35:00.022198 sudo[1635]: pam_unix(sudo:session): session closed for user root Jan 13 20:35:00.030433 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:35:00.030835 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:35:00.054546 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:35:00.088845 augenrules[1657]: No rules Jan 13 20:35:00.091094 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:35:00.091397 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:35:00.092950 sudo[1634]: pam_unix(sudo:session): session closed for user root Jan 13 20:35:00.094696 sshd[1633]: Connection closed by 10.0.0.1 port 33644 Jan 13 20:35:00.095042 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:00.111179 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:33644.service: Deactivated successfully. Jan 13 20:35:00.113419 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:35:00.114943 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:35:00.121497 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:33658.service - OpenSSH per-connection server daemon (10.0.0.1:33658). Jan 13 20:35:00.122782 systemd-logind[1468]: Removed session 6. Jan 13 20:35:00.154647 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 33658 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:35:00.156356 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:00.160764 systemd-logind[1468]: New session 7 of user core. 
Jan 13 20:35:00.171442 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:35:00.228376 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:35:00.228808 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:35:00.906683 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:35:00.907227 (dockerd)[1689]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:35:01.522852 dockerd[1689]: time="2025-01-13T20:35:01.522754650Z" level=info msg="Starting up" Jan 13 20:35:02.145066 dockerd[1689]: time="2025-01-13T20:35:02.144995171Z" level=info msg="Loading containers: start." Jan 13 20:35:02.242739 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:35:02.248489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:35:02.457238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:35:02.463027 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:35:02.646212 kubelet[1784]: E0113 20:35:02.646067 1784 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:35:02.655329 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:35:02.655618 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 20:35:02.840293 kernel: Initializing XFRM netlink socket Jan 13 20:35:02.931202 systemd-networkd[1393]: docker0: Link UP Jan 13 20:35:02.971019 dockerd[1689]: time="2025-01-13T20:35:02.970962429Z" level=info msg="Loading containers: done." Jan 13 20:35:02.987903 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck865394590-merged.mount: Deactivated successfully. Jan 13 20:35:02.991709 dockerd[1689]: time="2025-01-13T20:35:02.991643897Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:35:02.991810 dockerd[1689]: time="2025-01-13T20:35:02.991789390Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 13 20:35:02.991937 dockerd[1689]: time="2025-01-13T20:35:02.991917159Z" level=info msg="Daemon has completed initialization" Jan 13 20:35:03.037182 dockerd[1689]: time="2025-01-13T20:35:03.037092924Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:35:03.037372 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:35:03.997122 containerd[1484]: time="2025-01-13T20:35:03.997054973Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 20:35:04.742685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount809159589.mount: Deactivated successfully. 
Jan 13 20:35:05.698271 containerd[1484]: time="2025-01-13T20:35:05.698172758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:05.701543 containerd[1484]: time="2025-01-13T20:35:05.701442018Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Jan 13 20:35:05.702943 containerd[1484]: time="2025-01-13T20:35:05.702897136Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:05.705962 containerd[1484]: time="2025-01-13T20:35:05.705918912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:05.706980 containerd[1484]: time="2025-01-13T20:35:05.706900613Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 1.709791579s" Jan 13 20:35:05.706980 containerd[1484]: time="2025-01-13T20:35:05.706973008Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 13 20:35:05.731906 containerd[1484]: time="2025-01-13T20:35:05.731866253Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 20:35:07.299843 containerd[1484]: time="2025-01-13T20:35:07.299768520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:07.300675 containerd[1484]: time="2025-01-13T20:35:07.300641246Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Jan 13 20:35:07.301786 containerd[1484]: time="2025-01-13T20:35:07.301745016Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:07.304754 containerd[1484]: time="2025-01-13T20:35:07.304688455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:07.306162 containerd[1484]: time="2025-01-13T20:35:07.306100943Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.574199935s" Jan 13 20:35:07.306223 containerd[1484]: time="2025-01-13T20:35:07.306162338Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 13 20:35:07.356055 containerd[1484]: time="2025-01-13T20:35:07.355980176Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 20:35:09.924120 containerd[1484]: time="2025-01-13T20:35:09.924062270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:09.924950 containerd[1484]: time="2025-01-13T20:35:09.924916511Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Jan 13 20:35:09.926336 containerd[1484]: time="2025-01-13T20:35:09.926278395Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:09.930915 containerd[1484]: time="2025-01-13T20:35:09.930883730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:09.932098 containerd[1484]: time="2025-01-13T20:35:09.932005593Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 2.575985442s" Jan 13 20:35:09.932098 containerd[1484]: time="2025-01-13T20:35:09.932081516Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 13 20:35:09.956563 containerd[1484]: time="2025-01-13T20:35:09.956508867Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 20:35:11.439715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963072647.mount: Deactivated successfully. 
Jan 13 20:35:12.580786 containerd[1484]: time="2025-01-13T20:35:12.580721675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:12.581839 containerd[1484]: time="2025-01-13T20:35:12.581802762Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Jan 13 20:35:12.583437 containerd[1484]: time="2025-01-13T20:35:12.583366785Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:12.585932 containerd[1484]: time="2025-01-13T20:35:12.585892781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:12.586815 containerd[1484]: time="2025-01-13T20:35:12.586750590Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.630198702s" Jan 13 20:35:12.586815 containerd[1484]: time="2025-01-13T20:35:12.586810161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 20:35:12.620072 containerd[1484]: time="2025-01-13T20:35:12.619999632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:35:12.743153 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:35:12.757536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 20:35:12.967762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:35:12.973608 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:35:13.302067 kubelet[2017]: E0113 20:35:13.301989 2017 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:35:13.307395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:35:13.307670 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:35:13.642328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285246595.mount: Deactivated successfully. Jan 13 20:35:15.379100 containerd[1484]: time="2025-01-13T20:35:15.379002662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:15.379909 containerd[1484]: time="2025-01-13T20:35:15.379869236Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 20:35:15.381343 containerd[1484]: time="2025-01-13T20:35:15.381292796Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:15.384972 containerd[1484]: time="2025-01-13T20:35:15.384917973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:15.386100 containerd[1484]: 
time="2025-01-13T20:35:15.386060214Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.766008064s" Jan 13 20:35:15.386175 containerd[1484]: time="2025-01-13T20:35:15.386100630Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:35:15.417375 containerd[1484]: time="2025-01-13T20:35:15.417305810Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:35:16.555586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount442434615.mount: Deactivated successfully. Jan 13 20:35:16.561623 containerd[1484]: time="2025-01-13T20:35:16.561573101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:16.562429 containerd[1484]: time="2025-01-13T20:35:16.562348134Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 20:35:16.563652 containerd[1484]: time="2025-01-13T20:35:16.563607235Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:16.565659 containerd[1484]: time="2025-01-13T20:35:16.565596605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:16.566354 containerd[1484]: time="2025-01-13T20:35:16.566293121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id 
\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.148944811s" Jan 13 20:35:16.566354 containerd[1484]: time="2025-01-13T20:35:16.566339247Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 20:35:16.588986 containerd[1484]: time="2025-01-13T20:35:16.588939573Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 20:35:17.316444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778387293.mount: Deactivated successfully. Jan 13 20:35:20.355187 containerd[1484]: time="2025-01-13T20:35:20.355111604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:20.355961 containerd[1484]: time="2025-01-13T20:35:20.355863885Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 13 20:35:20.357276 containerd[1484]: time="2025-01-13T20:35:20.357208396Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:20.360666 containerd[1484]: time="2025-01-13T20:35:20.360616897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:20.362099 containerd[1484]: time="2025-01-13T20:35:20.362061806Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest 
\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.773079553s" Jan 13 20:35:20.362147 containerd[1484]: time="2025-01-13T20:35:20.362097743Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 13 20:35:22.975573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:35:22.988501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:35:23.007133 systemd[1]: Reloading requested from client PID 2215 ('systemctl') (unit session-7.scope)... Jan 13 20:35:23.007153 systemd[1]: Reloading... Jan 13 20:35:23.094305 zram_generator::config[2257]: No configuration found. Jan 13 20:35:23.306644 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:35:23.385571 systemd[1]: Reloading finished in 377 ms. Jan 13 20:35:23.442572 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:35:23.442673 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:35:23.442961 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:35:23.444839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:35:23.609979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:35:23.616951 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:35:23.665643 kubelet[2302]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:35:23.665643 kubelet[2302]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:35:23.665643 kubelet[2302]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:35:23.666061 kubelet[2302]: I0113 20:35:23.665695 2302 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:35:23.992973 kubelet[2302]: I0113 20:35:23.992921 2302 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:35:23.992973 kubelet[2302]: I0113 20:35:23.992961 2302 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:35:23.993320 kubelet[2302]: I0113 20:35:23.993291 2302 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:35:24.013043 kubelet[2302]: I0113 20:35:24.012116 2302 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:35:24.014479 kubelet[2302]: E0113 20:35:24.014451 2302 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:24.031728 kubelet[2302]: I0113 20:35:24.031687 2302 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:35:24.033142 kubelet[2302]: I0113 20:35:24.033098 2302 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:35:24.033346 kubelet[2302]: I0113 20:35:24.033142 2302 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:35:24.033854 kubelet[2302]: I0113 20:35:24.033824 2302 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 
20:35:24.033854 kubelet[2302]: I0113 20:35:24.033843 2302 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:35:24.034008 kubelet[2302]: I0113 20:35:24.033982 2302 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:35:24.034777 kubelet[2302]: I0113 20:35:24.034749 2302 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:35:24.034777 kubelet[2302]: I0113 20:35:24.034776 2302 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:35:24.034827 kubelet[2302]: I0113 20:35:24.034800 2302 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:35:24.034827 kubelet[2302]: I0113 20:35:24.034823 2302 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:35:24.035792 kubelet[2302]: W0113 20:35:24.035738 2302 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:24.035839 kubelet[2302]: E0113 20:35:24.035793 2302 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:24.036985 kubelet[2302]: W0113 20:35:24.036949 2302 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:24.036985 kubelet[2302]: E0113 20:35:24.036983 2302 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection 
refused Jan 13 20:35:24.038748 kubelet[2302]: I0113 20:35:24.038718 2302 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:35:24.040424 kubelet[2302]: I0113 20:35:24.040143 2302 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:35:24.040424 kubelet[2302]: W0113 20:35:24.040207 2302 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:35:24.041491 kubelet[2302]: I0113 20:35:24.041073 2302 server.go:1264] "Started kubelet" Jan 13 20:35:24.041491 kubelet[2302]: I0113 20:35:24.041155 2302 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:35:24.041491 kubelet[2302]: I0113 20:35:24.041393 2302 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:35:24.041751 kubelet[2302]: I0113 20:35:24.041728 2302 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:35:24.043956 kubelet[2302]: I0113 20:35:24.043878 2302 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:35:24.044211 kubelet[2302]: I0113 20:35:24.044179 2302 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:35:24.046936 kubelet[2302]: E0113 20:35:24.046895 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:35:24.046988 kubelet[2302]: I0113 20:35:24.046949 2302 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:35:24.047102 kubelet[2302]: I0113 20:35:24.047077 2302 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:35:24.047192 kubelet[2302]: I0113 20:35:24.047154 2302 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:35:24.047695 kubelet[2302]: W0113 20:35:24.047617 2302 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:24.047695 kubelet[2302]: E0113 20:35:24.047675 2302 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:24.048603 kubelet[2302]: E0113 20:35:24.048566 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="200ms" Jan 13 20:35:24.048899 kubelet[2302]: E0113 20:35:24.048876 2302 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:35:24.049128 kubelet[2302]: I0113 20:35:24.049106 2302 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:35:24.049840 kubelet[2302]: I0113 20:35:24.049209 2302 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:35:24.050497 kubelet[2302]: I0113 20:35:24.050469 2302 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:35:24.050693 kubelet[2302]: E0113 20:35:24.050211 2302 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5adfe92dc248 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:35:24.041044552 +0000 UTC m=+0.418965758,LastTimestamp:2025-01-13 20:35:24.041044552 +0000 UTC m=+0.418965758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:35:24.065922 kubelet[2302]: I0113 20:35:24.065889 2302 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:35:24.065922 kubelet[2302]: I0113 20:35:24.065909 2302 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:35:24.065922 kubelet[2302]: I0113 20:35:24.065926 2302 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:35:24.071520 kubelet[2302]: I0113 20:35:24.071482 2302 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 13 20:35:24.073205 kubelet[2302]: I0113 20:35:24.073142 2302 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:35:24.073329 kubelet[2302]: I0113 20:35:24.073210 2302 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:35:24.073392 kubelet[2302]: I0113 20:35:24.073367 2302 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:35:24.073486 kubelet[2302]: E0113 20:35:24.073436 2302 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:35:24.073807 kubelet[2302]: W0113 20:35:24.073751 2302 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:24.073807 kubelet[2302]: E0113 20:35:24.073808 2302 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:24.148822 kubelet[2302]: I0113 20:35:24.148760 2302 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:35:24.149225 kubelet[2302]: E0113 20:35:24.149179 2302 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 13 20:35:24.174491 kubelet[2302]: E0113 20:35:24.174432 2302 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:35:24.249730 kubelet[2302]: E0113 20:35:24.249575 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="400ms" Jan 13 20:35:24.339059 kubelet[2302]: I0113 20:35:24.338993 2302 policy_none.go:49] "None policy: Start" Jan 13 20:35:24.339952 kubelet[2302]: I0113 20:35:24.339910 2302 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:35:24.339952 kubelet[2302]: I0113 20:35:24.339937 2302 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:35:24.350099 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:35:24.350962 kubelet[2302]: I0113 20:35:24.350917 2302 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:35:24.351347 kubelet[2302]: E0113 20:35:24.351312 2302 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 13 20:35:24.369363 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:35:24.372724 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 20:35:24.375553 kubelet[2302]: E0113 20:35:24.375511 2302 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:35:24.382311 kubelet[2302]: I0113 20:35:24.382268 2302 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:35:24.382589 kubelet[2302]: I0113 20:35:24.382529 2302 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:35:24.383357 kubelet[2302]: I0113 20:35:24.382654 2302 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:35:24.383492 kubelet[2302]: E0113 20:35:24.383477 2302 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:35:24.650941 kubelet[2302]: E0113 20:35:24.650779 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="800ms" Jan 13 20:35:24.753718 kubelet[2302]: I0113 20:35:24.753668 2302 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:35:24.754288 kubelet[2302]: E0113 20:35:24.754114 2302 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 13 20:35:24.776387 kubelet[2302]: I0113 20:35:24.776322 2302 topology_manager.go:215] "Topology Admit Handler" podUID="547611c7b25bf9e97421668b531ff012" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:35:24.777787 kubelet[2302]: I0113 20:35:24.777736 2302 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" 
podName="kube-controller-manager-localhost" Jan 13 20:35:24.778894 kubelet[2302]: I0113 20:35:24.778845 2302 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:35:24.784547 systemd[1]: Created slice kubepods-burstable-pod547611c7b25bf9e97421668b531ff012.slice - libcontainer container kubepods-burstable-pod547611c7b25bf9e97421668b531ff012.slice. Jan 13 20:35:24.808397 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. Jan 13 20:35:24.823615 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. Jan 13 20:35:24.851664 kubelet[2302]: I0113 20:35:24.851606 2302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/547611c7b25bf9e97421668b531ff012-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"547611c7b25bf9e97421668b531ff012\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:35:24.851664 kubelet[2302]: I0113 20:35:24.851655 2302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/547611c7b25bf9e97421668b531ff012-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"547611c7b25bf9e97421668b531ff012\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:35:24.851822 kubelet[2302]: I0113 20:35:24.851680 2302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 13 20:35:24.851822 kubelet[2302]: I0113 20:35:24.851703 2302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:35:24.851822 kubelet[2302]: I0113 20:35:24.851723 2302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/547611c7b25bf9e97421668b531ff012-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"547611c7b25bf9e97421668b531ff012\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:35:24.851822 kubelet[2302]: I0113 20:35:24.851742 2302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:35:24.851822 kubelet[2302]: I0113 20:35:24.851761 2302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:35:24.851975 kubelet[2302]: I0113 20:35:24.851780 2302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") 
" pod="kube-system/kube-controller-manager-localhost" Jan 13 20:35:24.851975 kubelet[2302]: I0113 20:35:24.851797 2302 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:35:25.105834 kubelet[2302]: E0113 20:35:25.105778 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:35:25.106429 containerd[1484]: time="2025-01-13T20:35:25.106383772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:547611c7b25bf9e97421668b531ff012,Namespace:kube-system,Attempt:0,}" Jan 13 20:35:25.121645 kubelet[2302]: E0113 20:35:25.121605 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:35:25.121981 containerd[1484]: time="2025-01-13T20:35:25.121948320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Jan 13 20:35:25.126264 kubelet[2302]: E0113 20:35:25.126223 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:35:25.126554 containerd[1484]: time="2025-01-13T20:35:25.126525381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Jan 13 20:35:25.251805 kubelet[2302]: W0113 20:35:25.251725 2302 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:25.251805 kubelet[2302]: E0113 20:35:25.251800 2302 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:25.277409 kubelet[2302]: W0113 20:35:25.277348 2302 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:25.277409 kubelet[2302]: E0113 20:35:25.277408 2302 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:25.434574 kubelet[2302]: W0113 20:35:25.434418 2302 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:25.434574 kubelet[2302]: E0113 20:35:25.434487 2302 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:25.451914 kubelet[2302]: E0113 20:35:25.451876 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="1.6s" Jan 13 20:35:25.451914 kubelet[2302]: W0113 20:35:25.451892 2302 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:25.452006 kubelet[2302]: E0113 20:35:25.451920 2302 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused Jan 13 20:35:25.556133 kubelet[2302]: I0113 20:35:25.556081 2302 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:35:25.556494 kubelet[2302]: E0113 20:35:25.556454 2302 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 13 20:35:25.644099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1535709345.mount: Deactivated successfully. 
Jan 13 20:35:25.651297 containerd[1484]: time="2025-01-13T20:35:25.651220667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:35:25.652019 containerd[1484]: time="2025-01-13T20:35:25.651954948Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:35:25.654877 containerd[1484]: time="2025-01-13T20:35:25.654819181Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:35:25.656716 containerd[1484]: time="2025-01-13T20:35:25.656674550Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:35:25.657616 containerd[1484]: time="2025-01-13T20:35:25.657569407Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:35:25.658665 containerd[1484]: time="2025-01-13T20:35:25.658624349Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:35:25.659808 containerd[1484]: time="2025-01-13T20:35:25.659745478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:35:25.660682 containerd[1484]: time="2025-01-13T20:35:25.660649792Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 554.165188ms"
Jan 13 20:35:25.660819 containerd[1484]: time="2025-01-13T20:35:25.660772877Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:35:25.663922 containerd[1484]: time="2025-01-13T20:35:25.663887838Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 537.297504ms"
Jan 13 20:35:25.666123 containerd[1484]: time="2025-01-13T20:35:25.666083617Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.059992ms"
Jan 13 20:35:25.825957 containerd[1484]: time="2025-01-13T20:35:25.825832810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:35:25.826218 containerd[1484]: time="2025-01-13T20:35:25.825992344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:35:25.826218 containerd[1484]: time="2025-01-13T20:35:25.826009307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:35:25.826218 containerd[1484]: time="2025-01-13T20:35:25.826118695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:35:25.827170 containerd[1484]: time="2025-01-13T20:35:25.824603846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:35:25.827460 containerd[1484]: time="2025-01-13T20:35:25.827199437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:35:25.827460 containerd[1484]: time="2025-01-13T20:35:25.827219085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:35:25.827460 containerd[1484]: time="2025-01-13T20:35:25.827378368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:35:25.828161 containerd[1484]: time="2025-01-13T20:35:25.828037705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:35:25.828161 containerd[1484]: time="2025-01-13T20:35:25.828105766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:35:25.828161 containerd[1484]: time="2025-01-13T20:35:25.828122497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:35:25.828445 containerd[1484]: time="2025-01-13T20:35:25.828196038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:35:25.850473 systemd[1]: Started cri-containerd-4e124caaefad4d6cb4ec64293406a5efcb55faaa924a8106187b2fa5cbede848.scope - libcontainer container 4e124caaefad4d6cb4ec64293406a5efcb55faaa924a8106187b2fa5cbede848.
Jan 13 20:35:25.854739 systemd[1]: Started cri-containerd-88b6a716585318bdddb288a750c7f769ec005f7f73cf99ad62b1dbe5c151bf68.scope - libcontainer container 88b6a716585318bdddb288a750c7f769ec005f7f73cf99ad62b1dbe5c151bf68.
Jan 13 20:35:25.876372 systemd[1]: Started cri-containerd-3fe4aad831a3d1eeebe535e48effd746c0966f7a3875c20189c49e800bfd972c.scope - libcontainer container 3fe4aad831a3d1eeebe535e48effd746c0966f7a3875c20189c49e800bfd972c.
Jan 13 20:35:25.924493 containerd[1484]: time="2025-01-13T20:35:25.924440663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"88b6a716585318bdddb288a750c7f769ec005f7f73cf99ad62b1dbe5c151bf68\""
Jan 13 20:35:25.925743 kubelet[2302]: E0113 20:35:25.925670 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:25.928810 containerd[1484]: time="2025-01-13T20:35:25.928773859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e124caaefad4d6cb4ec64293406a5efcb55faaa924a8106187b2fa5cbede848\""
Jan 13 20:35:25.932941 containerd[1484]: time="2025-01-13T20:35:25.928989790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:547611c7b25bf9e97421668b531ff012,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fe4aad831a3d1eeebe535e48effd746c0966f7a3875c20189c49e800bfd972c\""
Jan 13 20:35:25.933168 containerd[1484]: time="2025-01-13T20:35:25.933132883Z" level=info msg="CreateContainer within sandbox \"88b6a716585318bdddb288a750c7f769ec005f7f73cf99ad62b1dbe5c151bf68\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 13 20:35:25.933722 kubelet[2302]: E0113 20:35:25.933695 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:25.934085 kubelet[2302]: E0113 20:35:25.933867 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:25.937469 containerd[1484]: time="2025-01-13T20:35:25.937435580Z" level=info msg="CreateContainer within sandbox \"4e124caaefad4d6cb4ec64293406a5efcb55faaa924a8106187b2fa5cbede848\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 13 20:35:25.937586 containerd[1484]: time="2025-01-13T20:35:25.937559686Z" level=info msg="CreateContainer within sandbox \"3fe4aad831a3d1eeebe535e48effd746c0966f7a3875c20189c49e800bfd972c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 13 20:35:25.969377 containerd[1484]: time="2025-01-13T20:35:25.969320317Z" level=info msg="CreateContainer within sandbox \"3fe4aad831a3d1eeebe535e48effd746c0966f7a3875c20189c49e800bfd972c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"80b38f8a738689ac222bacf69d1231ed9cf0123a1ee584f0d19e72167eced6f1\""
Jan 13 20:35:25.970177 containerd[1484]: time="2025-01-13T20:35:25.970064276Z" level=info msg="StartContainer for \"80b38f8a738689ac222bacf69d1231ed9cf0123a1ee584f0d19e72167eced6f1\""
Jan 13 20:35:25.971863 containerd[1484]: time="2025-01-13T20:35:25.971827358Z" level=info msg="CreateContainer within sandbox \"88b6a716585318bdddb288a750c7f769ec005f7f73cf99ad62b1dbe5c151bf68\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7352523891fec3670520948526ad551974038d83f648c2f14337626a15e32600\""
Jan 13 20:35:25.972586 containerd[1484]: time="2025-01-13T20:35:25.972563763Z" level=info msg="StartContainer for \"7352523891fec3670520948526ad551974038d83f648c2f14337626a15e32600\""
Jan 13 20:35:25.974028 containerd[1484]: time="2025-01-13T20:35:25.973969785Z" level=info msg="CreateContainer within sandbox \"4e124caaefad4d6cb4ec64293406a5efcb55faaa924a8106187b2fa5cbede848\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"869e814633a67cc5813d1561b49c0e74fd356040aae8c3a18bbc79ff7c7a9680\""
Jan 13 20:35:25.975378 containerd[1484]: time="2025-01-13T20:35:25.974500357Z" level=info msg="StartContainer for \"869e814633a67cc5813d1561b49c0e74fd356040aae8c3a18bbc79ff7c7a9680\""
Jan 13 20:35:26.000416 systemd[1]: Started cri-containerd-80b38f8a738689ac222bacf69d1231ed9cf0123a1ee584f0d19e72167eced6f1.scope - libcontainer container 80b38f8a738689ac222bacf69d1231ed9cf0123a1ee584f0d19e72167eced6f1.
Jan 13 20:35:26.004698 systemd[1]: Started cri-containerd-7352523891fec3670520948526ad551974038d83f648c2f14337626a15e32600.scope - libcontainer container 7352523891fec3670520948526ad551974038d83f648c2f14337626a15e32600.
Jan 13 20:35:26.006442 systemd[1]: Started cri-containerd-869e814633a67cc5813d1561b49c0e74fd356040aae8c3a18bbc79ff7c7a9680.scope - libcontainer container 869e814633a67cc5813d1561b49c0e74fd356040aae8c3a18bbc79ff7c7a9680.
Jan 13 20:35:26.053779 containerd[1484]: time="2025-01-13T20:35:26.053705425Z" level=info msg="StartContainer for \"869e814633a67cc5813d1561b49c0e74fd356040aae8c3a18bbc79ff7c7a9680\" returns successfully"
Jan 13 20:35:26.060360 containerd[1484]: time="2025-01-13T20:35:26.060145636Z" level=info msg="StartContainer for \"80b38f8a738689ac222bacf69d1231ed9cf0123a1ee584f0d19e72167eced6f1\" returns successfully"
Jan 13 20:35:26.063770 containerd[1484]: time="2025-01-13T20:35:26.063745656Z" level=info msg="StartContainer for \"7352523891fec3670520948526ad551974038d83f648c2f14337626a15e32600\" returns successfully"
Jan 13 20:35:26.083881 kubelet[2302]: E0113 20:35:26.083745 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:26.085870 kubelet[2302]: E0113 20:35:26.085573 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:26.087174 kubelet[2302]: E0113 20:35:26.087084 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:27.090031 kubelet[2302]: E0113 20:35:27.089991 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:27.090563 kubelet[2302]: E0113 20:35:27.090513 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:27.157910 kubelet[2302]: I0113 20:35:27.157870 2302 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 20:35:27.596491 kubelet[2302]: E0113 20:35:27.596199 2302 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 13 20:35:27.685331 kubelet[2302]: I0113 20:35:27.685278 2302 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 13 20:35:27.691689 kubelet[2302]: E0113 20:35:27.691656 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 20:35:27.792393 kubelet[2302]: E0113 20:35:27.792341 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 20:35:27.893071 kubelet[2302]: E0113 20:35:27.892929 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 20:35:27.994053 kubelet[2302]: E0113 20:35:27.993998 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 20:35:28.091683 kubelet[2302]: E0113 20:35:28.091625 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:28.094378 kubelet[2302]: E0113 20:35:28.094345 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 20:35:28.150718 kubelet[2302]: E0113 20:35:28.150602 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:28.194439 kubelet[2302]: E0113 20:35:28.194401 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 20:35:28.295171 kubelet[2302]: E0113 20:35:28.295103 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 20:35:28.395942 kubelet[2302]: E0113 20:35:28.395869 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 20:35:29.037830 kubelet[2302]: I0113 20:35:29.037778 2302 apiserver.go:52] "Watching apiserver"
Jan 13 20:35:29.048289 kubelet[2302]: I0113 20:35:29.048222 2302 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 13 20:35:30.553486 systemd[1]: Reloading requested from client PID 2583 ('systemctl') (unit session-7.scope)...
Jan 13 20:35:30.553512 systemd[1]: Reloading...
Jan 13 20:35:30.663276 zram_generator::config[2625]: No configuration found.
Jan 13 20:35:30.779518 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:35:30.873071 systemd[1]: Reloading finished in 319 ms.
Jan 13 20:35:30.920807 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:35:30.937682 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:35:30.937955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:35:30.953496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:35:31.105829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:35:31.112047 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:35:31.150921 kubelet[2667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:35:31.150921 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:35:31.150921 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:35:31.150921 kubelet[2667]: I0113 20:35:31.150622 2667 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:35:31.229460 kubelet[2667]: I0113 20:35:31.229414 2667 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 13 20:35:31.229460 kubelet[2667]: I0113 20:35:31.229445 2667 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:35:31.229728 kubelet[2667]: I0113 20:35:31.229696 2667 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 13 20:35:31.230969 kubelet[2667]: I0113 20:35:31.230929 2667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 13 20:35:31.232464 kubelet[2667]: I0113 20:35:31.232367 2667 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:35:31.244283 kubelet[2667]: I0113 20:35:31.244229 2667 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:35:31.244545 kubelet[2667]: I0113 20:35:31.244504 2667 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:35:31.244708 kubelet[2667]: I0113 20:35:31.244538 2667 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:35:31.244804 kubelet[2667]: I0113 20:35:31.244718 2667 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:35:31.244804 kubelet[2667]: I0113 20:35:31.244729 2667 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:35:31.244804 kubelet[2667]: I0113 20:35:31.244780 2667 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:35:31.244901 kubelet[2667]: I0113 20:35:31.244887 2667 kubelet.go:400] "Attempting to sync node with API server"
Jan 13 20:35:31.244934 kubelet[2667]: I0113 20:35:31.244904 2667 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:35:31.244934 kubelet[2667]: I0113 20:35:31.244926 2667 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:35:31.244979 kubelet[2667]: I0113 20:35:31.244946 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:35:31.247209 kubelet[2667]: I0113 20:35:31.247076 2667 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:35:31.247681 kubelet[2667]: I0113 20:35:31.247626 2667 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:35:31.249017 kubelet[2667]: I0113 20:35:31.248855 2667 server.go:1264] "Started kubelet"
Jan 13 20:35:31.249207 kubelet[2667]: I0113 20:35:31.249145 2667 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:35:31.251416 kubelet[2667]: I0113 20:35:31.251399 2667 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 20:35:31.252365 kubelet[2667]: I0113 20:35:31.251565 2667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:35:31.252365 kubelet[2667]: I0113 20:35:31.251915 2667 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:35:31.254589 kubelet[2667]: I0113 20:35:31.254569 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:35:31.255874 kubelet[2667]: I0113 20:35:31.255841 2667 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:35:31.256032 kubelet[2667]: I0113 20:35:31.256005 2667 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 20:35:31.256222 kubelet[2667]: I0113 20:35:31.256201 2667 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 20:35:31.261577 kubelet[2667]: I0113 20:35:31.261527 2667 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:35:31.261577 kubelet[2667]: I0113 20:35:31.261562 2667 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:35:31.261744 kubelet[2667]: I0113 20:35:31.261706 2667 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:35:31.263087 kubelet[2667]: E0113 20:35:31.263048 2667 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:35:31.270169 kubelet[2667]: I0113 20:35:31.270080 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:35:31.271717 kubelet[2667]: I0113 20:35:31.271691 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:35:31.272146 kubelet[2667]: I0113 20:35:31.271812 2667 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:35:31.272146 kubelet[2667]: I0113 20:35:31.271838 2667 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 20:35:31.272146 kubelet[2667]: E0113 20:35:31.271888 2667 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:35:31.308054 kubelet[2667]: I0113 20:35:31.308003 2667 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:35:31.308054 kubelet[2667]: I0113 20:35:31.308028 2667 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:35:31.308054 kubelet[2667]: I0113 20:35:31.308049 2667 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:35:31.308349 kubelet[2667]: I0113 20:35:31.308202 2667 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 20:35:31.308349 kubelet[2667]: I0113 20:35:31.308213 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 20:35:31.308349 kubelet[2667]: I0113 20:35:31.308231 2667 policy_none.go:49] "None policy: Start"
Jan 13 20:35:31.309147 kubelet[2667]: I0113 20:35:31.309109 2667 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:35:31.309188 kubelet[2667]: I0113 20:35:31.309155 2667 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:35:31.309409 kubelet[2667]: I0113 20:35:31.309380 2667 state_mem.go:75] "Updated machine memory state"
Jan 13 20:35:31.315494 kubelet[2667]: I0113 20:35:31.315345 2667 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:35:31.315722 kubelet[2667]: I0113 20:35:31.315592 2667 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 20:35:31.315722 kubelet[2667]: I0113 20:35:31.315714 2667 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:35:31.361575 kubelet[2667]: I0113 20:35:31.361482 2667 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 20:35:31.370422 kubelet[2667]: I0113 20:35:31.370373 2667 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Jan 13 20:35:31.370571 kubelet[2667]: I0113 20:35:31.370499 2667 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 13 20:35:31.372124 kubelet[2667]: I0113 20:35:31.372028 2667 topology_manager.go:215] "Topology Admit Handler" podUID="547611c7b25bf9e97421668b531ff012" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 13 20:35:31.372124 kubelet[2667]: I0113 20:35:31.372149 2667 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 13 20:35:31.372501 kubelet[2667]: I0113 20:35:31.372206 2667 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 13 20:35:31.457599 kubelet[2667]: I0113 20:35:31.457347 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost"
Jan 13 20:35:31.557735 kubelet[2667]: I0113 20:35:31.557662 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/547611c7b25bf9e97421668b531ff012-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"547611c7b25bf9e97421668b531ff012\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:35:31.557735 kubelet[2667]: I0113 20:35:31.557737 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/547611c7b25bf9e97421668b531ff012-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"547611c7b25bf9e97421668b531ff012\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:35:31.557957 kubelet[2667]: I0113 20:35:31.557771 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:35:31.557957 kubelet[2667]: I0113 20:35:31.557816 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/547611c7b25bf9e97421668b531ff012-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"547611c7b25bf9e97421668b531ff012\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:35:31.557957 kubelet[2667]: I0113 20:35:31.557842 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:35:31.557957 kubelet[2667]: I0113 20:35:31.557884 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:35:31.557957 kubelet[2667]: I0113 20:35:31.557940 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:35:31.558088 kubelet[2667]: I0113 20:35:31.557960 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:35:31.683204 kubelet[2667]: E0113 20:35:31.683106 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:31.685574 kubelet[2667]: E0113 20:35:31.685548 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:31.685821 kubelet[2667]: E0113 20:35:31.685790 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:32.246476 kubelet[2667]: I0113 20:35:32.246416 2667 apiserver.go:52] "Watching apiserver"
Jan 13 20:35:32.256389 kubelet[2667]: I0113 20:35:32.256235 2667 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 13 20:35:32.286528 kubelet[2667]: E0113 20:35:32.285703 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:32.286528 kubelet[2667]: E0113 20:35:32.286006 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:32.293497 kubelet[2667]: E0113 20:35:32.293446 2667 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 13 20:35:32.293826 kubelet[2667]: E0113 20:35:32.293803 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:32.339782 kubelet[2667]: I0113 20:35:32.339690 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.33966581 podStartE2EDuration="1.33966581s" podCreationTimestamp="2025-01-13 20:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:35:32.322703705 +0000 UTC m=+1.206526496" watchObservedRunningTime="2025-01-13 20:35:32.33966581 +0000 UTC m=+1.223488600"
Jan 13 20:35:32.356278 kubelet[2667]: I0113 20:35:32.356181 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.35616234 podStartE2EDuration="1.35616234s" podCreationTimestamp="2025-01-13 20:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:35:32.34038372 +0000 UTC m=+1.224206530" watchObservedRunningTime="2025-01-13 20:35:32.35616234 +0000 UTC m=+1.239985130"
Jan 13 20:35:33.289072 kubelet[2667]: E0113 20:35:33.288156 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:35.057339 update_engine[1469]: I20250113 20:35:35.057227 1469 update_attempter.cc:509] Updating boot flags...
Jan 13 20:35:35.096376 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2743)
Jan 13 20:35:35.143428 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2742)
Jan 13 20:35:35.186325 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2742)
Jan 13 20:35:36.039451 sudo[1668]: pam_unix(sudo:session): session closed for user root
Jan 13 20:35:36.041182 sshd[1667]: Connection closed by 10.0.0.1 port 33658
Jan 13 20:35:36.042039 sshd-session[1665]: pam_unix(sshd:session): session closed for user core
Jan 13 20:35:36.046579 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:33658.service: Deactivated successfully.
Jan 13 20:35:36.048917 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:35:36.049139 systemd[1]: session-7.scope: Consumed 5.210s CPU time, 191.8M memory peak, 0B memory swap peak.
Jan 13 20:35:36.049732 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:35:36.050761 systemd-logind[1468]: Removed session 7.
Jan 13 20:35:37.109375 kubelet[2667]: E0113 20:35:37.109214 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:37.121537 kubelet[2667]: I0113 20:35:37.121432 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.121414554 podStartE2EDuration="6.121414554s" podCreationTimestamp="2025-01-13 20:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:35:32.356400612 +0000 UTC m=+1.240223402" watchObservedRunningTime="2025-01-13 20:35:37.121414554 +0000 UTC m=+6.005237344"
Jan 13 20:35:37.293144 kubelet[2667]: E0113 20:35:37.293105 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:40.603537 kubelet[2667]: E0113 20:35:40.603444 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:41.111696 kubelet[2667]: E0113 20:35:41.111654 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:41.300625 kubelet[2667]: E0113 20:35:41.300333 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:41.300625 kubelet[2667]: E0113 20:35:41.300452 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:44.399172 kubelet[2667]: I0113 20:35:44.398955 2667 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 13 20:35:44.400393 kubelet[2667]: I0113 20:35:44.399672 2667 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 20:35:44.400444 containerd[1484]: time="2025-01-13T20:35:44.399426160Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 20:35:45.165750 kubelet[2667]: I0113 20:35:45.163152 2667 topology_manager.go:215] "Topology Admit Handler" podUID="9077679c-48cb-44e3-9dbb-49c8b1073369" podNamespace="kube-system" podName="kube-proxy-rvg2b"
Jan 13 20:35:45.172484 systemd[1]: Created slice kubepods-besteffort-pod9077679c_48cb_44e3_9dbb_49c8b1073369.slice - libcontainer container kubepods-besteffort-pod9077679c_48cb_44e3_9dbb_49c8b1073369.slice.
Jan 13 20:35:45.239545 kubelet[2667]: I0113 20:35:45.239507 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9077679c-48cb-44e3-9dbb-49c8b1073369-kube-proxy\") pod \"kube-proxy-rvg2b\" (UID: \"9077679c-48cb-44e3-9dbb-49c8b1073369\") " pod="kube-system/kube-proxy-rvg2b"
Jan 13 20:35:45.239545 kubelet[2667]: I0113 20:35:45.239542 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thxk9\" (UniqueName: \"kubernetes.io/projected/9077679c-48cb-44e3-9dbb-49c8b1073369-kube-api-access-thxk9\") pod \"kube-proxy-rvg2b\" (UID: \"9077679c-48cb-44e3-9dbb-49c8b1073369\") " pod="kube-system/kube-proxy-rvg2b"
Jan 13 20:35:45.239545 kubelet[2667]: I0113 20:35:45.239563 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9077679c-48cb-44e3-9dbb-49c8b1073369-xtables-lock\") pod \"kube-proxy-rvg2b\" (UID: \"9077679c-48cb-44e3-9dbb-49c8b1073369\") " pod="kube-system/kube-proxy-rvg2b"
Jan 13 20:35:45.239768 kubelet[2667]: I0113 20:35:45.239576 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9077679c-48cb-44e3-9dbb-49c8b1073369-lib-modules\") pod \"kube-proxy-rvg2b\" (UID: \"9077679c-48cb-44e3-9dbb-49c8b1073369\") " pod="kube-system/kube-proxy-rvg2b"
Jan 13 20:35:45.370316 kubelet[2667]: I0113 20:35:45.370240 2667 topology_manager.go:215] "Topology Admit Handler" podUID="92522ffb-0216-4bb0-bf17-cc46de136790" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-qdf5s"
Jan 13 20:35:45.377859 systemd[1]: Created slice kubepods-besteffort-pod92522ffb_0216_4bb0_bf17_cc46de136790.slice - libcontainer container kubepods-besteffort-pod92522ffb_0216_4bb0_bf17_cc46de136790.slice.
Jan 13 20:35:45.440964 kubelet[2667]: I0113 20:35:45.440808 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqj9b\" (UniqueName: \"kubernetes.io/projected/92522ffb-0216-4bb0-bf17-cc46de136790-kube-api-access-rqj9b\") pod \"tigera-operator-7bc55997bb-qdf5s\" (UID: \"92522ffb-0216-4bb0-bf17-cc46de136790\") " pod="tigera-operator/tigera-operator-7bc55997bb-qdf5s"
Jan 13 20:35:45.440964 kubelet[2667]: I0113 20:35:45.440868 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/92522ffb-0216-4bb0-bf17-cc46de136790-var-lib-calico\") pod \"tigera-operator-7bc55997bb-qdf5s\" (UID: \"92522ffb-0216-4bb0-bf17-cc46de136790\") " pod="tigera-operator/tigera-operator-7bc55997bb-qdf5s"
Jan 13 20:35:45.486098 kubelet[2667]: E0113 20:35:45.486058 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:45.487103 containerd[1484]: time="2025-01-13T20:35:45.486686276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rvg2b,Uid:9077679c-48cb-44e3-9dbb-49c8b1073369,Namespace:kube-system,Attempt:0,}"
Jan 13 20:35:45.511754 containerd[1484]: time="2025-01-13T20:35:45.511576266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:35:45.511754 containerd[1484]: time="2025-01-13T20:35:45.511692806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:35:45.511754 containerd[1484]: time="2025-01-13T20:35:45.511706952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:35:45.512005 containerd[1484]: time="2025-01-13T20:35:45.511828060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:35:45.535409 systemd[1]: Started cri-containerd-4b359a46eab8e401fcb947de93aa69c40f15077da07f4f5590aa0a820903739e.scope - libcontainer container 4b359a46eab8e401fcb947de93aa69c40f15077da07f4f5590aa0a820903739e.
Jan 13 20:35:45.566969 containerd[1484]: time="2025-01-13T20:35:45.566897887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rvg2b,Uid:9077679c-48cb-44e3-9dbb-49c8b1073369,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b359a46eab8e401fcb947de93aa69c40f15077da07f4f5590aa0a820903739e\"" Jan 13 20:35:45.567753 kubelet[2667]: E0113 20:35:45.567724 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:35:45.571572 containerd[1484]: time="2025-01-13T20:35:45.571508413Z" level=info msg="CreateContainer within sandbox \"4b359a46eab8e401fcb947de93aa69c40f15077da07f4f5590aa0a820903739e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:35:45.590770 containerd[1484]: time="2025-01-13T20:35:45.590705167Z" level=info msg="CreateContainer within sandbox \"4b359a46eab8e401fcb947de93aa69c40f15077da07f4f5590aa0a820903739e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c5b2eb143dc1d9e62696a6c9af67429a97536fb36e2c1a7ad1fc567e301203e1\"" Jan 13 20:35:45.591380 containerd[1484]: time="2025-01-13T20:35:45.591339783Z" level=info msg="StartContainer for \"c5b2eb143dc1d9e62696a6c9af67429a97536fb36e2c1a7ad1fc567e301203e1\"" Jan 13 20:35:45.628462 systemd[1]: Started cri-containerd-c5b2eb143dc1d9e62696a6c9af67429a97536fb36e2c1a7ad1fc567e301203e1.scope - libcontainer container c5b2eb143dc1d9e62696a6c9af67429a97536fb36e2c1a7ad1fc567e301203e1. 
Jan 13 20:35:45.668688 containerd[1484]: time="2025-01-13T20:35:45.668632806Z" level=info msg="StartContainer for \"c5b2eb143dc1d9e62696a6c9af67429a97536fb36e2c1a7ad1fc567e301203e1\" returns successfully" Jan 13 20:35:45.682397 containerd[1484]: time="2025-01-13T20:35:45.682350679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-qdf5s,Uid:92522ffb-0216-4bb0-bf17-cc46de136790,Namespace:tigera-operator,Attempt:0,}" Jan 13 20:35:45.722749 containerd[1484]: time="2025-01-13T20:35:45.722085356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:45.722749 containerd[1484]: time="2025-01-13T20:35:45.722165397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:45.722749 containerd[1484]: time="2025-01-13T20:35:45.722195904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:45.723127 containerd[1484]: time="2025-01-13T20:35:45.722985281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:45.744587 systemd[1]: Started cri-containerd-1ef346878e1f8d7112a0b6df725f522874cdf498a10e2b3505887ad1d40b5f25.scope - libcontainer container 1ef346878e1f8d7112a0b6df725f522874cdf498a10e2b3505887ad1d40b5f25. 
Jan 13 20:35:45.785467 containerd[1484]: time="2025-01-13T20:35:45.785379757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-qdf5s,Uid:92522ffb-0216-4bb0-bf17-cc46de136790,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1ef346878e1f8d7112a0b6df725f522874cdf498a10e2b3505887ad1d40b5f25\"" Jan 13 20:35:45.787835 containerd[1484]: time="2025-01-13T20:35:45.787782754Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 20:35:46.310697 kubelet[2667]: E0113 20:35:46.309987 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:35:46.318109 kubelet[2667]: I0113 20:35:46.318024 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rvg2b" podStartSLOduration=1.318005499 podStartE2EDuration="1.318005499s" podCreationTimestamp="2025-01-13 20:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:35:46.317929204 +0000 UTC m=+15.201752005" watchObservedRunningTime="2025-01-13 20:35:46.318005499 +0000 UTC m=+15.201828289" Jan 13 20:35:47.632173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928799740.mount: Deactivated successfully. 
Jan 13 20:35:48.578570 containerd[1484]: time="2025-01-13T20:35:48.578505909Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:48.579629 containerd[1484]: time="2025-01-13T20:35:48.579585200Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764301" Jan 13 20:35:48.580426 containerd[1484]: time="2025-01-13T20:35:48.580390807Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:48.582768 containerd[1484]: time="2025-01-13T20:35:48.582719671Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:48.583472 containerd[1484]: time="2025-01-13T20:35:48.583434826Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.795594303s" Jan 13 20:35:48.583472 containerd[1484]: time="2025-01-13T20:35:48.583463271Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 13 20:35:48.588291 containerd[1484]: time="2025-01-13T20:35:48.588227378Z" level=info msg="CreateContainer within sandbox \"1ef346878e1f8d7112a0b6df725f522874cdf498a10e2b3505887ad1d40b5f25\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 20:35:48.599935 containerd[1484]: time="2025-01-13T20:35:48.599896453Z" level=info msg="CreateContainer within sandbox 
\"1ef346878e1f8d7112a0b6df725f522874cdf498a10e2b3505887ad1d40b5f25\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a735b212f9aeb5486f726522ecc0101c0df511073322d8baf9b7774841eeb886\"" Jan 13 20:35:48.600355 containerd[1484]: time="2025-01-13T20:35:48.600293410Z" level=info msg="StartContainer for \"a735b212f9aeb5486f726522ecc0101c0df511073322d8baf9b7774841eeb886\"" Jan 13 20:35:48.624746 systemd[1]: run-containerd-runc-k8s.io-a735b212f9aeb5486f726522ecc0101c0df511073322d8baf9b7774841eeb886-runc.pVYQFp.mount: Deactivated successfully. Jan 13 20:35:48.640563 systemd[1]: Started cri-containerd-a735b212f9aeb5486f726522ecc0101c0df511073322d8baf9b7774841eeb886.scope - libcontainer container a735b212f9aeb5486f726522ecc0101c0df511073322d8baf9b7774841eeb886. Jan 13 20:35:48.671303 containerd[1484]: time="2025-01-13T20:35:48.669697242Z" level=info msg="StartContainer for \"a735b212f9aeb5486f726522ecc0101c0df511073322d8baf9b7774841eeb886\" returns successfully" Jan 13 20:35:49.327426 kubelet[2667]: I0113 20:35:49.327350 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-qdf5s" podStartSLOduration=1.527922859 podStartE2EDuration="4.327329159s" podCreationTimestamp="2025-01-13 20:35:45 +0000 UTC" firstStartedPulling="2025-01-13 20:35:45.787301888 +0000 UTC m=+14.671124678" lastFinishedPulling="2025-01-13 20:35:48.586708188 +0000 UTC m=+17.470530978" observedRunningTime="2025-01-13 20:35:49.327225424 +0000 UTC m=+18.211048224" watchObservedRunningTime="2025-01-13 20:35:49.327329159 +0000 UTC m=+18.211151949" Jan 13 20:35:51.508570 kubelet[2667]: I0113 20:35:51.508516 2667 topology_manager.go:215] "Topology Admit Handler" podUID="d5b8435e-5b4a-4ef7-b74c-f7239b680252" podNamespace="calico-system" podName="calico-typha-f4f4d5467-rrz4q" Jan 13 20:35:51.529428 systemd[1]: Created slice kubepods-besteffort-podd5b8435e_5b4a_4ef7_b74c_f7239b680252.slice - libcontainer container 
kubepods-besteffort-podd5b8435e_5b4a_4ef7_b74c_f7239b680252.slice. Jan 13 20:35:51.539879 kubelet[2667]: I0113 20:35:51.537979 2667 topology_manager.go:215] "Topology Admit Handler" podUID="ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" podNamespace="calico-system" podName="calico-node-5gwxk" Jan 13 20:35:51.547837 kubelet[2667]: W0113 20:35:51.547790 2667 reflector.go:547] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 13 20:35:51.547976 kubelet[2667]: E0113 20:35:51.547846 2667 reflector.go:150] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 13 20:35:51.547976 kubelet[2667]: W0113 20:35:51.547905 2667 reflector.go:547] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 13 20:35:51.547976 kubelet[2667]: E0113 20:35:51.547948 2667 reflector.go:150] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 13 20:35:51.551869 systemd[1]: Created slice kubepods-besteffort-podce1fad39_7ddb_4f27_8aaa_e7bd140c6cbf.slice - libcontainer container kubepods-besteffort-podce1fad39_7ddb_4f27_8aaa_e7bd140c6cbf.slice. 
Jan 13 20:35:51.581977 kubelet[2667]: I0113 20:35:51.581923 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-var-lib-calico\") pod \"calico-node-5gwxk\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " pod="calico-system/calico-node-5gwxk" Jan 13 20:35:51.581977 kubelet[2667]: I0113 20:35:51.581993 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-node-certs\") pod \"calico-node-5gwxk\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " pod="calico-system/calico-node-5gwxk" Jan 13 20:35:51.582223 kubelet[2667]: I0113 20:35:51.582013 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-var-run-calico\") pod \"calico-node-5gwxk\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " pod="calico-system/calico-node-5gwxk" Jan 13 20:35:51.582223 kubelet[2667]: I0113 20:35:51.582035 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-lib-modules\") pod \"calico-node-5gwxk\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " pod="calico-system/calico-node-5gwxk" Jan 13 20:35:51.582223 kubelet[2667]: I0113 20:35:51.582055 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-cni-log-dir\") pod \"calico-node-5gwxk\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " pod="calico-system/calico-node-5gwxk" Jan 13 20:35:51.582223 kubelet[2667]: I0113 20:35:51.582073 2667 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbz5w\" (UniqueName: \"kubernetes.io/projected/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-kube-api-access-vbz5w\") pod \"calico-node-5gwxk\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " pod="calico-system/calico-node-5gwxk" Jan 13 20:35:51.582223 kubelet[2667]: I0113 20:35:51.582106 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-xtables-lock\") pod \"calico-node-5gwxk\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " pod="calico-system/calico-node-5gwxk" Jan 13 20:35:51.582430 kubelet[2667]: I0113 20:35:51.582124 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-cni-net-dir\") pod \"calico-node-5gwxk\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " pod="calico-system/calico-node-5gwxk" Jan 13 20:35:51.582430 kubelet[2667]: I0113 20:35:51.582145 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5b8435e-5b4a-4ef7-b74c-f7239b680252-tigera-ca-bundle\") pod \"calico-typha-f4f4d5467-rrz4q\" (UID: \"d5b8435e-5b4a-4ef7-b74c-f7239b680252\") " pod="calico-system/calico-typha-f4f4d5467-rrz4q" Jan 13 20:35:51.582430 kubelet[2667]: I0113 20:35:51.582169 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-policysync\") pod \"calico-node-5gwxk\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " pod="calico-system/calico-node-5gwxk" Jan 13 20:35:51.582430 kubelet[2667]: I0113 20:35:51.582186 2667 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-tigera-ca-bundle\") pod \"calico-node-5gwxk\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " pod="calico-system/calico-node-5gwxk" Jan 13 20:35:51.582430 kubelet[2667]: I0113 20:35:51.582213 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkng5\" (UniqueName: \"kubernetes.io/projected/d5b8435e-5b4a-4ef7-b74c-f7239b680252-kube-api-access-xkng5\") pod \"calico-typha-f4f4d5467-rrz4q\" (UID: \"d5b8435e-5b4a-4ef7-b74c-f7239b680252\") " pod="calico-system/calico-typha-f4f4d5467-rrz4q" Jan 13 20:35:51.582609 kubelet[2667]: I0113 20:35:51.582235 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d5b8435e-5b4a-4ef7-b74c-f7239b680252-typha-certs\") pod \"calico-typha-f4f4d5467-rrz4q\" (UID: \"d5b8435e-5b4a-4ef7-b74c-f7239b680252\") " pod="calico-system/calico-typha-f4f4d5467-rrz4q" Jan 13 20:35:51.582609 kubelet[2667]: I0113 20:35:51.582277 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-cni-bin-dir\") pod \"calico-node-5gwxk\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " pod="calico-system/calico-node-5gwxk" Jan 13 20:35:51.582609 kubelet[2667]: I0113 20:35:51.582297 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-flexvol-driver-host\") pod \"calico-node-5gwxk\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " pod="calico-system/calico-node-5gwxk" Jan 13 20:35:51.670414 kubelet[2667]: I0113 20:35:51.670356 2667 topology_manager.go:215] 
"Topology Admit Handler" podUID="6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d" podNamespace="calico-system" podName="csi-node-driver-kbrd5" Jan 13 20:35:51.682482 kubelet[2667]: E0113 20:35:51.682433 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbrd5" podUID="6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d" Jan 13 20:35:51.699390 kubelet[2667]: E0113 20:35:51.699333 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.699390 kubelet[2667]: W0113 20:35:51.699367 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.699390 kubelet[2667]: E0113 20:35:51.699396 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.701421 kubelet[2667]: E0113 20:35:51.701381 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.701421 kubelet[2667]: W0113 20:35:51.701407 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.701421 kubelet[2667]: E0113 20:35:51.701426 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.701697 kubelet[2667]: E0113 20:35:51.701671 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.701697 kubelet[2667]: W0113 20:35:51.701686 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.701817 kubelet[2667]: E0113 20:35:51.701709 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.702258 kubelet[2667]: E0113 20:35:51.702219 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.702258 kubelet[2667]: W0113 20:35:51.702254 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.702258 kubelet[2667]: E0113 20:35:51.702269 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.711205 kubelet[2667]: E0113 20:35:51.710872 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.711205 kubelet[2667]: W0113 20:35:51.710897 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.711205 kubelet[2667]: E0113 20:35:51.710927 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.712064 kubelet[2667]: E0113 20:35:51.711229 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.712064 kubelet[2667]: W0113 20:35:51.711240 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.712064 kubelet[2667]: E0113 20:35:51.711288 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.712064 kubelet[2667]: E0113 20:35:51.711653 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.712064 kubelet[2667]: W0113 20:35:51.711664 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.712064 kubelet[2667]: E0113 20:35:51.711676 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.769715 kubelet[2667]: E0113 20:35:51.769598 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.769715 kubelet[2667]: W0113 20:35:51.769622 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.769715 kubelet[2667]: E0113 20:35:51.769643 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.769903 kubelet[2667]: E0113 20:35:51.769841 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.769903 kubelet[2667]: W0113 20:35:51.769851 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.769903 kubelet[2667]: E0113 20:35:51.769861 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.770096 kubelet[2667]: E0113 20:35:51.770072 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.770096 kubelet[2667]: W0113 20:35:51.770092 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.770182 kubelet[2667]: E0113 20:35:51.770103 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.770479 kubelet[2667]: E0113 20:35:51.770462 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.770479 kubelet[2667]: W0113 20:35:51.770476 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.770650 kubelet[2667]: E0113 20:35:51.770486 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.770741 kubelet[2667]: E0113 20:35:51.770723 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.770741 kubelet[2667]: W0113 20:35:51.770739 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.770834 kubelet[2667]: E0113 20:35:51.770750 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.770987 kubelet[2667]: E0113 20:35:51.770967 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.770987 kubelet[2667]: W0113 20:35:51.770983 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.771198 kubelet[2667]: E0113 20:35:51.770995 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.771231 kubelet[2667]: E0113 20:35:51.771221 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.771279 kubelet[2667]: W0113 20:35:51.771232 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.771279 kubelet[2667]: E0113 20:35:51.771256 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.771662 kubelet[2667]: E0113 20:35:51.771649 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.771662 kubelet[2667]: W0113 20:35:51.771660 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.771731 kubelet[2667]: E0113 20:35:51.771671 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.771893 kubelet[2667]: E0113 20:35:51.771880 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.772108 kubelet[2667]: W0113 20:35:51.772073 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.772108 kubelet[2667]: E0113 20:35:51.772104 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.772353 kubelet[2667]: E0113 20:35:51.772339 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.772353 kubelet[2667]: W0113 20:35:51.772351 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.772477 kubelet[2667]: E0113 20:35:51.772360 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.772573 kubelet[2667]: E0113 20:35:51.772560 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.772604 kubelet[2667]: W0113 20:35:51.772573 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.772604 kubelet[2667]: E0113 20:35:51.772582 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.772785 kubelet[2667]: E0113 20:35:51.772772 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.772785 kubelet[2667]: W0113 20:35:51.772784 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.772856 kubelet[2667]: E0113 20:35:51.772794 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.772988 kubelet[2667]: E0113 20:35:51.772976 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.772988 kubelet[2667]: W0113 20:35:51.772986 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.773045 kubelet[2667]: E0113 20:35:51.772995 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.773204 kubelet[2667]: E0113 20:35:51.773188 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.773233 kubelet[2667]: W0113 20:35:51.773204 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.773233 kubelet[2667]: E0113 20:35:51.773214 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.773469 kubelet[2667]: E0113 20:35:51.773456 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.773512 kubelet[2667]: W0113 20:35:51.773468 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.773512 kubelet[2667]: E0113 20:35:51.773478 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.773663 kubelet[2667]: E0113 20:35:51.773651 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.773663 kubelet[2667]: W0113 20:35:51.773662 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.773719 kubelet[2667]: E0113 20:35:51.773671 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.773870 kubelet[2667]: E0113 20:35:51.773857 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.773870 kubelet[2667]: W0113 20:35:51.773868 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.773924 kubelet[2667]: E0113 20:35:51.773878 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.774098 kubelet[2667]: E0113 20:35:51.774077 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.774121 kubelet[2667]: W0113 20:35:51.774096 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.774121 kubelet[2667]: E0113 20:35:51.774106 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.774332 kubelet[2667]: E0113 20:35:51.774320 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.774332 kubelet[2667]: W0113 20:35:51.774331 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.774392 kubelet[2667]: E0113 20:35:51.774340 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.774531 kubelet[2667]: E0113 20:35:51.774520 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.774531 kubelet[2667]: W0113 20:35:51.774530 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.774581 kubelet[2667]: E0113 20:35:51.774539 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.795008 kubelet[2667]: E0113 20:35:51.794991 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.795008 kubelet[2667]: W0113 20:35:51.795004 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.795117 kubelet[2667]: E0113 20:35:51.795015 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.795117 kubelet[2667]: I0113 20:35:51.795041 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d-registration-dir\") pod \"csi-node-driver-kbrd5\" (UID: \"6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d\") " pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:35:51.795272 kubelet[2667]: E0113 20:35:51.795233 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.795313 kubelet[2667]: W0113 20:35:51.795274 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.795313 kubelet[2667]: E0113 20:35:51.795290 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.795313 kubelet[2667]: I0113 20:35:51.795305 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt5dv\" (UniqueName: \"kubernetes.io/projected/6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d-kube-api-access-gt5dv\") pod \"csi-node-driver-kbrd5\" (UID: \"6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d\") " pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:35:51.795505 kubelet[2667]: E0113 20:35:51.795491 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.795505 kubelet[2667]: W0113 20:35:51.795502 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.795592 kubelet[2667]: E0113 20:35:51.795515 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.795592 kubelet[2667]: I0113 20:35:51.795528 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d-kubelet-dir\") pod \"csi-node-driver-kbrd5\" (UID: \"6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d\") " pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:35:51.795723 kubelet[2667]: E0113 20:35:51.795702 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.795723 kubelet[2667]: W0113 20:35:51.795713 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.795791 kubelet[2667]: E0113 20:35:51.795725 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.795791 kubelet[2667]: I0113 20:35:51.795738 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d-varrun\") pod \"csi-node-driver-kbrd5\" (UID: \"6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d\") " pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:35:51.795922 kubelet[2667]: E0113 20:35:51.795908 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.795922 kubelet[2667]: W0113 20:35:51.795920 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.795992 kubelet[2667]: E0113 20:35:51.795931 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.796129 kubelet[2667]: E0113 20:35:51.796114 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.796129 kubelet[2667]: W0113 20:35:51.796126 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.796211 kubelet[2667]: E0113 20:35:51.796140 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.796211 kubelet[2667]: I0113 20:35:51.796154 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d-socket-dir\") pod \"csi-node-driver-kbrd5\" (UID: \"6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d\") " pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:35:51.796358 kubelet[2667]: E0113 20:35:51.796344 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.796358 kubelet[2667]: W0113 20:35:51.796355 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.796434 kubelet[2667]: E0113 20:35:51.796367 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.796533 kubelet[2667]: E0113 20:35:51.796521 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.796533 kubelet[2667]: W0113 20:35:51.796530 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.796607 kubelet[2667]: E0113 20:35:51.796541 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.796723 kubelet[2667]: E0113 20:35:51.796710 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.796723 kubelet[2667]: W0113 20:35:51.796719 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.796790 kubelet[2667]: E0113 20:35:51.796734 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.796919 kubelet[2667]: E0113 20:35:51.796905 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.796919 kubelet[2667]: W0113 20:35:51.796916 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.796999 kubelet[2667]: E0113 20:35:51.796941 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.797127 kubelet[2667]: E0113 20:35:51.797114 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.797127 kubelet[2667]: W0113 20:35:51.797123 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.797191 kubelet[2667]: E0113 20:35:51.797144 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.797312 kubelet[2667]: E0113 20:35:51.797300 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.797312 kubelet[2667]: W0113 20:35:51.797309 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.797389 kubelet[2667]: E0113 20:35:51.797328 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.797482 kubelet[2667]: E0113 20:35:51.797470 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.797482 kubelet[2667]: W0113 20:35:51.797478 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.797548 kubelet[2667]: E0113 20:35:51.797496 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.797669 kubelet[2667]: E0113 20:35:51.797653 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.797669 kubelet[2667]: W0113 20:35:51.797665 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.797770 kubelet[2667]: E0113 20:35:51.797673 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.797851 kubelet[2667]: E0113 20:35:51.797835 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.797851 kubelet[2667]: W0113 20:35:51.797846 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.797925 kubelet[2667]: E0113 20:35:51.797854 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.798032 kubelet[2667]: E0113 20:35:51.798020 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.798032 kubelet[2667]: W0113 20:35:51.798028 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.798154 kubelet[2667]: E0113 20:35:51.798036 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.841112 kubelet[2667]: E0113 20:35:51.841061 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:35:51.843444 containerd[1484]: time="2025-01-13T20:35:51.843409122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f4f4d5467-rrz4q,Uid:d5b8435e-5b4a-4ef7-b74c-f7239b680252,Namespace:calico-system,Attempt:0,}" Jan 13 20:35:51.879052 containerd[1484]: time="2025-01-13T20:35:51.878924859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:51.879317 containerd[1484]: time="2025-01-13T20:35:51.879012033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:51.879317 containerd[1484]: time="2025-01-13T20:35:51.879158108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:51.879578 containerd[1484]: time="2025-01-13T20:35:51.879532031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:51.897863 kubelet[2667]: E0113 20:35:51.897511 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.897863 kubelet[2667]: W0113 20:35:51.897540 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.898364 kubelet[2667]: E0113 20:35:51.898169 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.898827 kubelet[2667]: E0113 20:35:51.898715 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.898827 kubelet[2667]: W0113 20:35:51.898731 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.900432 kubelet[2667]: E0113 20:35:51.898757 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.903338 kubelet[2667]: E0113 20:35:51.903285 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.903338 kubelet[2667]: W0113 20:35:51.903314 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.903338 kubelet[2667]: E0113 20:35:51.903339 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.904439 kubelet[2667]: E0113 20:35:51.904059 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.904439 kubelet[2667]: W0113 20:35:51.904122 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.904439 kubelet[2667]: E0113 20:35:51.904138 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.904531 kubelet[2667]: E0113 20:35:51.904495 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.904556 kubelet[2667]: W0113 20:35:51.904509 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.904577 kubelet[2667]: E0113 20:35:51.904568 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.905710 kubelet[2667]: E0113 20:35:51.905675 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.905710 kubelet[2667]: W0113 20:35:51.905694 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.905908 kubelet[2667]: E0113 20:35:51.905853 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.907692 systemd[1]: Started cri-containerd-6fbeabcea944c65a7e44424f1df4a9225d1cfc03d73df08f90ad3322c67b57af.scope - libcontainer container 6fbeabcea944c65a7e44424f1df4a9225d1cfc03d73df08f90ad3322c67b57af. Jan 13 20:35:51.907966 kubelet[2667]: E0113 20:35:51.907816 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.907966 kubelet[2667]: W0113 20:35:51.907833 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.908736 kubelet[2667]: E0113 20:35:51.908140 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.910104 kubelet[2667]: E0113 20:35:51.909014 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.910104 kubelet[2667]: W0113 20:35:51.909026 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.910104 kubelet[2667]: E0113 20:35:51.910020 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.910989 kubelet[2667]: E0113 20:35:51.910959 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.910989 kubelet[2667]: W0113 20:35:51.910978 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.911489 kubelet[2667]: E0113 20:35:51.911455 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.912045 kubelet[2667]: E0113 20:35:51.912009 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.912045 kubelet[2667]: W0113 20:35:51.912029 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.912846 kubelet[2667]: E0113 20:35:51.912713 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.914148 kubelet[2667]: E0113 20:35:51.913550 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.914148 kubelet[2667]: W0113 20:35:51.913566 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.914512 kubelet[2667]: E0113 20:35:51.913585 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:51.915330 kubelet[2667]: E0113 20:35:51.915302 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.915330 kubelet[2667]: W0113 20:35:51.915325 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.915402 kubelet[2667]: E0113 20:35:51.915355 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:51.916366 kubelet[2667]: E0113 20:35:51.916343 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:51.916366 kubelet[2667]: W0113 20:35:51.916363 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:51.917123 kubelet[2667]: E0113 20:35:51.917094 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 13 20:35:51.917566 kubelet[2667]: E0113 20:35:51.917545 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:51.917566 kubelet[2667]: W0113 20:35:51.917564 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:51.918098 kubelet[2667]: E0113 20:35:51.918053 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:51.918533 kubelet[2667]: E0113 20:35:51.918504 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:51.918533 kubelet[2667]: W0113 20:35:51.918526 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:51.919058 kubelet[2667]: E0113 20:35:51.919027 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:51.919940 kubelet[2667]: E0113 20:35:51.919922 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:51.919940 kubelet[2667]: W0113 20:35:51.919938 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:51.920142 kubelet[2667]: E0113 20:35:51.920115 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:51.921034 kubelet[2667]: E0113 20:35:51.921012 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:51.921102 kubelet[2667]: W0113 20:35:51.921034 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:51.921140 kubelet[2667]: E0113 20:35:51.921125 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:51.921385 kubelet[2667]: E0113 20:35:51.921370 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:51.921409 kubelet[2667]: W0113 20:35:51.921385 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:51.921572 kubelet[2667]: E0113 20:35:51.921498 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:51.921646 kubelet[2667]: E0113 20:35:51.921632 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:51.921675 kubelet[2667]: W0113 20:35:51.921645 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:51.921956 kubelet[2667]: E0113 20:35:51.921935 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:51.922319 kubelet[2667]: E0113 20:35:51.922270 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:51.922319 kubelet[2667]: W0113 20:35:51.922286 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:51.922616 kubelet[2667]: E0113 20:35:51.922586 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:51.923186 kubelet[2667]: E0113 20:35:51.923169 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:51.923186 kubelet[2667]: W0113 20:35:51.923185 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:51.923466 kubelet[2667]: E0113 20:35:51.923444 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:51.923792 kubelet[2667]: E0113 20:35:51.923776 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:51.923845 kubelet[2667]: W0113 20:35:51.923793 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:51.923917 kubelet[2667]: E0113 20:35:51.923899 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:51.924616 kubelet[2667]: E0113 20:35:51.924599 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:51.924680 kubelet[2667]: W0113 20:35:51.924616 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:51.924945 kubelet[2667]: E0113 20:35:51.924918 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:51.925425 kubelet[2667]: E0113 20:35:51.925397 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:51.925425 kubelet[2667]: W0113 20:35:51.925413 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:51.925484 kubelet[2667]: E0113 20:35:51.925440 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:51.925819 kubelet[2667]: E0113 20:35:51.925798 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:51.926156 kubelet[2667]: W0113 20:35:51.925913 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:51.926156 kubelet[2667]: E0113 20:35:51.925944 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:51.926505 kubelet[2667]: E0113 20:35:51.926489 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:51.926558 kubelet[2667]: W0113 20:35:51.926505 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:51.926558 kubelet[2667]: E0113 20:35:51.926519 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:51.938403 kubelet[2667]: E0113 20:35:51.938370 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:51.938403 kubelet[2667]: W0113 20:35:51.938396 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:51.938565 kubelet[2667]: E0113 20:35:51.938420 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:51.965558 containerd[1484]: time="2025-01-13T20:35:51.965488640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f4f4d5467-rrz4q,Uid:d5b8435e-5b4a-4ef7-b74c-f7239b680252,Namespace:calico-system,Attempt:0,} returns sandbox id \"6fbeabcea944c65a7e44424f1df4a9225d1cfc03d73df08f90ad3322c67b57af\""
Jan 13 20:35:52.014391 kubelet[2667]: E0113 20:35:52.014350 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:52.016806 containerd[1484]: time="2025-01-13T20:35:52.016775667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 13 20:35:52.020701 kubelet[2667]: E0113 20:35:52.020610 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:52.020701 kubelet[2667]: W0113 20:35:52.020630 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:52.020701 kubelet[2667]: E0113 20:35:52.020648 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:52.121392 kubelet[2667]: E0113 20:35:52.121343 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:52.121392 kubelet[2667]: W0113 20:35:52.121372 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:52.121392 kubelet[2667]: E0113 20:35:52.121395 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:52.222679 kubelet[2667]: E0113 20:35:52.222635 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:52.222679 kubelet[2667]: W0113 20:35:52.222655 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:52.222679 kubelet[2667]: E0113 20:35:52.222671 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:52.323268 kubelet[2667]: E0113 20:35:52.323119 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:52.323268 kubelet[2667]: W0113 20:35:52.323139 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:52.323268 kubelet[2667]: E0113 20:35:52.323158 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:52.423645 kubelet[2667]: E0113 20:35:52.423613 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:52.423645 kubelet[2667]: W0113 20:35:52.423634 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:52.423645 kubelet[2667]: E0113 20:35:52.423653 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:52.524628 kubelet[2667]: E0113 20:35:52.524570 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:52.524628 kubelet[2667]: W0113 20:35:52.524601 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:52.524628 kubelet[2667]: E0113 20:35:52.524620 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:52.541211 kubelet[2667]: E0113 20:35:52.541172 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:52.541211 kubelet[2667]: W0113 20:35:52.541197 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:52.541211 kubelet[2667]: E0113 20:35:52.541217 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:52.755431 kubelet[2667]: E0113 20:35:52.755385 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:52.756054 containerd[1484]: time="2025-01-13T20:35:52.755870231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5gwxk,Uid:ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf,Namespace:calico-system,Attempt:0,}"
Jan 13 20:35:52.780470 containerd[1484]: time="2025-01-13T20:35:52.780273815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:35:52.781233 containerd[1484]: time="2025-01-13T20:35:52.781167145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:35:52.781233 containerd[1484]: time="2025-01-13T20:35:52.781202873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:35:52.781385 containerd[1484]: time="2025-01-13T20:35:52.781320784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:35:52.809445 systemd[1]: Started cri-containerd-1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda.scope - libcontainer container 1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda.
Jan 13 20:35:52.838904 containerd[1484]: time="2025-01-13T20:35:52.838725714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5gwxk,Uid:ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda\""
Jan 13 20:35:52.839778 kubelet[2667]: E0113 20:35:52.839756 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:53.272827 kubelet[2667]: E0113 20:35:53.272774 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbrd5" podUID="6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d"
Jan 13 20:35:53.710034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2631357253.mount: Deactivated successfully.
Jan 13 20:35:55.049355 containerd[1484]: time="2025-01-13T20:35:55.049298584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:35:55.050135 containerd[1484]: time="2025-01-13T20:35:55.050066857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 13 20:35:55.051235 containerd[1484]: time="2025-01-13T20:35:55.051193946Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:35:55.053318 containerd[1484]: time="2025-01-13T20:35:55.053279767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:35:55.053936 containerd[1484]: time="2025-01-13T20:35:55.053900784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.037089991s"
Jan 13 20:35:55.053936 containerd[1484]: time="2025-01-13T20:35:55.053931903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 13 20:35:55.055206 containerd[1484]: time="2025-01-13T20:35:55.054780118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 20:35:55.062966 containerd[1484]: time="2025-01-13T20:35:55.062612399Z" level=info msg="CreateContainer within sandbox \"6fbeabcea944c65a7e44424f1df4a9225d1cfc03d73df08f90ad3322c67b57af\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 13 20:35:55.079149 containerd[1484]: time="2025-01-13T20:35:55.079096824Z" level=info msg="CreateContainer within sandbox \"6fbeabcea944c65a7e44424f1df4a9225d1cfc03d73df08f90ad3322c67b57af\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"43a79136bce0a6a74f5b811cf8700c71c292db51842d06802a46c5f084417bca\""
Jan 13 20:35:55.079682 containerd[1484]: time="2025-01-13T20:35:55.079653321Z" level=info msg="StartContainer for \"43a79136bce0a6a74f5b811cf8700c71c292db51842d06802a46c5f084417bca\""
Jan 13 20:35:55.110387 systemd[1]: Started cri-containerd-43a79136bce0a6a74f5b811cf8700c71c292db51842d06802a46c5f084417bca.scope - libcontainer container 43a79136bce0a6a74f5b811cf8700c71c292db51842d06802a46c5f084417bca.
Jan 13 20:35:55.152993 containerd[1484]: time="2025-01-13T20:35:55.152944804Z" level=info msg="StartContainer for \"43a79136bce0a6a74f5b811cf8700c71c292db51842d06802a46c5f084417bca\" returns successfully"
Jan 13 20:35:55.273145 kubelet[2667]: E0113 20:35:55.273079 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbrd5" podUID="6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d"
Jan 13 20:35:55.330388 kubelet[2667]: E0113 20:35:55.330269 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:55.398138 kubelet[2667]: E0113 20:35:55.398104 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.398138 kubelet[2667]: W0113 20:35:55.398128 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.398138 kubelet[2667]: E0113 20:35:55.398150 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.398425 kubelet[2667]: E0113 20:35:55.398387 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.398425 kubelet[2667]: W0113 20:35:55.398396 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.398425 kubelet[2667]: E0113 20:35:55.398405 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.398626 kubelet[2667]: E0113 20:35:55.398613 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.398626 kubelet[2667]: W0113 20:35:55.398623 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.398680 kubelet[2667]: E0113 20:35:55.398632 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.398840 kubelet[2667]: E0113 20:35:55.398828 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.398840 kubelet[2667]: W0113 20:35:55.398838 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.398895 kubelet[2667]: E0113 20:35:55.398846 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.399073 kubelet[2667]: E0113 20:35:55.399061 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.399073 kubelet[2667]: W0113 20:35:55.399071 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.399128 kubelet[2667]: E0113 20:35:55.399079 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.399292 kubelet[2667]: E0113 20:35:55.399281 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.399292 kubelet[2667]: W0113 20:35:55.399290 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.399352 kubelet[2667]: E0113 20:35:55.399299 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.399515 kubelet[2667]: E0113 20:35:55.399503 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.399515 kubelet[2667]: W0113 20:35:55.399513 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.399562 kubelet[2667]: E0113 20:35:55.399521 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.399716 kubelet[2667]: E0113 20:35:55.399705 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.399716 kubelet[2667]: W0113 20:35:55.399714 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.399766 kubelet[2667]: E0113 20:35:55.399722 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.399936 kubelet[2667]: E0113 20:35:55.399924 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.399936 kubelet[2667]: W0113 20:35:55.399934 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.399998 kubelet[2667]: E0113 20:35:55.399942 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.400151 kubelet[2667]: E0113 20:35:55.400130 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.400151 kubelet[2667]: W0113 20:35:55.400140 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.400151 kubelet[2667]: E0113 20:35:55.400148 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.400375 kubelet[2667]: E0113 20:35:55.400363 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.400375 kubelet[2667]: W0113 20:35:55.400373 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.400442 kubelet[2667]: E0113 20:35:55.400381 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.400603 kubelet[2667]: E0113 20:35:55.400584 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.400603 kubelet[2667]: W0113 20:35:55.400599 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.400706 kubelet[2667]: E0113 20:35:55.400611 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.400865 kubelet[2667]: E0113 20:35:55.400836 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.400865 kubelet[2667]: W0113 20:35:55.400848 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.400865 kubelet[2667]: E0113 20:35:55.400858 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.401096 kubelet[2667]: E0113 20:35:55.401079 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.401096 kubelet[2667]: W0113 20:35:55.401093 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.401189 kubelet[2667]: E0113 20:35:55.401104 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.401416 kubelet[2667]: E0113 20:35:55.401340 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.401416 kubelet[2667]: W0113 20:35:55.401357 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.401416 kubelet[2667]: E0113 20:35:55.401371 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.444932 kubelet[2667]: E0113 20:35:55.444896 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.444932 kubelet[2667]: W0113 20:35:55.444922 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.445082 kubelet[2667]: E0113 20:35:55.444941 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.445263 kubelet[2667]: E0113 20:35:55.445225 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.445263 kubelet[2667]: W0113 20:35:55.445238 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.445354 kubelet[2667]: E0113 20:35:55.445278 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.445534 kubelet[2667]: E0113 20:35:55.445510 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.445534 kubelet[2667]: W0113 20:35:55.445525 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.445619 kubelet[2667]: E0113 20:35:55.445543 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.445793 kubelet[2667]: E0113 20:35:55.445762 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.445793 kubelet[2667]: W0113 20:35:55.445775 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.445793 kubelet[2667]: E0113 20:35:55.445791 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.446066 kubelet[2667]: E0113 20:35:55.446044 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.446066 kubelet[2667]: W0113 20:35:55.446057 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.446156 kubelet[2667]: E0113 20:35:55.446073 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.446342 kubelet[2667]: E0113 20:35:55.446320 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.446342 kubelet[2667]: W0113 20:35:55.446333 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.446433 kubelet[2667]: E0113 20:35:55.446350 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.446647 kubelet[2667]: E0113 20:35:55.446628 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.446647 kubelet[2667]: W0113 20:35:55.446640 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.446736 kubelet[2667]: E0113 20:35:55.446656 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.446889 kubelet[2667]: E0113 20:35:55.446872 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.446889 kubelet[2667]: W0113 20:35:55.446885 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.446975 kubelet[2667]: E0113 20:35:55.446901 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.447157 kubelet[2667]: E0113 20:35:55.447140 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.447157 kubelet[2667]: W0113 20:35:55.447152 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.447265 kubelet[2667]: E0113 20:35:55.447167 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:55.447403 kubelet[2667]: E0113 20:35:55.447386 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:55.447403 kubelet[2667]: W0113 20:35:55.447398 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:55.447486 kubelet[2667]: E0113 20:35:55.447423 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 13 20:35:55.447624 kubelet[2667]: E0113 20:35:55.447607 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:55.447624 kubelet[2667]: W0113 20:35:55.447620 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:55.447695 kubelet[2667]: E0113 20:35:55.447635 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:55.447854 kubelet[2667]: E0113 20:35:55.447838 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:55.447854 kubelet[2667]: W0113 20:35:55.447850 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:55.447934 kubelet[2667]: E0113 20:35:55.447864 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:55.448096 kubelet[2667]: E0113 20:35:55.448079 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:55.448096 kubelet[2667]: W0113 20:35:55.448093 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:55.448170 kubelet[2667]: E0113 20:35:55.448107 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:55.448364 kubelet[2667]: E0113 20:35:55.448347 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:55.448364 kubelet[2667]: W0113 20:35:55.448359 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:55.448450 kubelet[2667]: E0113 20:35:55.448377 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:55.448663 kubelet[2667]: E0113 20:35:55.448645 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:55.448663 kubelet[2667]: W0113 20:35:55.448658 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:55.448752 kubelet[2667]: E0113 20:35:55.448673 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:55.448902 kubelet[2667]: E0113 20:35:55.448885 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:55.448902 kubelet[2667]: W0113 20:35:55.448898 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:55.448982 kubelet[2667]: E0113 20:35:55.448913 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:55.449153 kubelet[2667]: E0113 20:35:55.449132 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:55.449153 kubelet[2667]: W0113 20:35:55.449148 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:55.449267 kubelet[2667]: E0113 20:35:55.449161 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:55.449558 kubelet[2667]: E0113 20:35:55.449541 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:55.449558 kubelet[2667]: W0113 20:35:55.449553 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:55.449643 kubelet[2667]: E0113 20:35:55.449565 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:56.333384 kubelet[2667]: I0113 20:35:56.333344 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:35:56.333974 kubelet[2667]: E0113 20:35:56.333950 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:35:56.408384 kubelet[2667]: E0113 20:35:56.408322 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.408384 kubelet[2667]: W0113 20:35:56.408368 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.408384 kubelet[2667]: E0113 20:35:56.408395 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:56.408804 kubelet[2667]: E0113 20:35:56.408783 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.408804 kubelet[2667]: W0113 20:35:56.408799 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.408925 kubelet[2667]: E0113 20:35:56.408811 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:56.409262 kubelet[2667]: E0113 20:35:56.409203 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.409262 kubelet[2667]: W0113 20:35:56.409225 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.409362 kubelet[2667]: E0113 20:35:56.409276 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:56.410354 kubelet[2667]: E0113 20:35:56.410102 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.410354 kubelet[2667]: W0113 20:35:56.410133 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.410354 kubelet[2667]: E0113 20:35:56.410161 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:56.410622 kubelet[2667]: E0113 20:35:56.410603 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.410622 kubelet[2667]: W0113 20:35:56.410619 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.410874 kubelet[2667]: E0113 20:35:56.410631 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:56.416947 kubelet[2667]: E0113 20:35:56.416908 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.416947 kubelet[2667]: W0113 20:35:56.416941 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.417236 kubelet[2667]: E0113 20:35:56.416963 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:56.417524 kubelet[2667]: E0113 20:35:56.417326 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.417524 kubelet[2667]: W0113 20:35:56.417343 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.417524 kubelet[2667]: E0113 20:35:56.417354 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:56.417917 kubelet[2667]: E0113 20:35:56.417897 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.417917 kubelet[2667]: W0113 20:35:56.417915 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.418019 kubelet[2667]: E0113 20:35:56.417927 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:56.418262 kubelet[2667]: E0113 20:35:56.418217 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.418262 kubelet[2667]: W0113 20:35:56.418232 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.418431 kubelet[2667]: E0113 20:35:56.418282 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:56.419140 kubelet[2667]: E0113 20:35:56.418527 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.419140 kubelet[2667]: W0113 20:35:56.418544 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.419140 kubelet[2667]: E0113 20:35:56.418556 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:56.419140 kubelet[2667]: E0113 20:35:56.418800 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.419140 kubelet[2667]: W0113 20:35:56.418811 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.419140 kubelet[2667]: E0113 20:35:56.418838 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:56.419140 kubelet[2667]: E0113 20:35:56.419074 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.419140 kubelet[2667]: W0113 20:35:56.419084 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.419140 kubelet[2667]: E0113 20:35:56.419094 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:56.419423 kubelet[2667]: E0113 20:35:56.419349 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.419423 kubelet[2667]: W0113 20:35:56.419360 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.419423 kubelet[2667]: E0113 20:35:56.419371 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:56.419657 kubelet[2667]: E0113 20:35:56.419632 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.419657 kubelet[2667]: W0113 20:35:56.419651 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.419729 kubelet[2667]: E0113 20:35:56.419663 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:56.419908 kubelet[2667]: E0113 20:35:56.419890 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.419908 kubelet[2667]: W0113 20:35:56.419906 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.419984 kubelet[2667]: E0113 20:35:56.419917 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:56.427786 containerd[1484]: time="2025-01-13T20:35:56.427740308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:56.428704 containerd[1484]: time="2025-01-13T20:35:56.428648434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 13 20:35:56.429871 containerd[1484]: time="2025-01-13T20:35:56.429829123Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:56.432202 containerd[1484]: time="2025-01-13T20:35:56.432144775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:56.432860 containerd[1484]: time="2025-01-13T20:35:56.432824783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.378010491s" Jan 13 20:35:56.432930 containerd[1484]: time="2025-01-13T20:35:56.432863255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 20:35:56.435373 containerd[1484]: time="2025-01-13T20:35:56.435292731Z" level=info msg="CreateContainer within sandbox \"1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:35:56.452070 containerd[1484]: time="2025-01-13T20:35:56.452019267Z" level=info msg="CreateContainer within sandbox \"1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756\"" Jan 13 20:35:56.452591 kubelet[2667]: E0113 20:35:56.452557 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.452657 kubelet[2667]: W0113 20:35:56.452591 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.452657 kubelet[2667]: E0113 20:35:56.452621 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:56.452730 containerd[1484]: time="2025-01-13T20:35:56.452594087Z" level=info msg="StartContainer for \"34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756\"" Jan 13 20:35:56.453460 kubelet[2667]: E0113 20:35:56.453156 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.453460 kubelet[2667]: W0113 20:35:56.453174 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.453460 kubelet[2667]: E0113 20:35:56.453196 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:56.453689 kubelet[2667]: E0113 20:35:56.453656 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.453689 kubelet[2667]: W0113 20:35:56.453679 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.453900 kubelet[2667]: E0113 20:35:56.453711 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:56.454050 kubelet[2667]: E0113 20:35:56.454034 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.454050 kubelet[2667]: W0113 20:35:56.454046 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.454222 kubelet[2667]: E0113 20:35:56.454071 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:56.454344 kubelet[2667]: E0113 20:35:56.454325 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.454344 kubelet[2667]: W0113 20:35:56.454339 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.454418 kubelet[2667]: E0113 20:35:56.454372 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:56.454688 kubelet[2667]: E0113 20:35:56.454667 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.454688 kubelet[2667]: W0113 20:35:56.454683 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.454773 kubelet[2667]: E0113 20:35:56.454717 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:56.454969 kubelet[2667]: E0113 20:35:56.454945 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.454969 kubelet[2667]: W0113 20:35:56.454963 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.455088 kubelet[2667]: E0113 20:35:56.455063 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:56.455315 kubelet[2667]: E0113 20:35:56.455289 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.455315 kubelet[2667]: W0113 20:35:56.455307 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.455414 kubelet[2667]: E0113 20:35:56.455328 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:56.455623 kubelet[2667]: E0113 20:35:56.455594 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.455623 kubelet[2667]: W0113 20:35:56.455611 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.455704 kubelet[2667]: E0113 20:35:56.455632 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:56.455930 kubelet[2667]: E0113 20:35:56.455903 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.455930 kubelet[2667]: W0113 20:35:56.455922 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.456004 kubelet[2667]: E0113 20:35:56.455943 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:56.456332 kubelet[2667]: E0113 20:35:56.456315 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.456396 kubelet[2667]: W0113 20:35:56.456331 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.456396 kubelet[2667]: E0113 20:35:56.456351 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:35:56.456716 kubelet[2667]: E0113 20:35:56.456696 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.456716 kubelet[2667]: W0113 20:35:56.456712 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.456799 kubelet[2667]: E0113 20:35:56.456731 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:35:56.457034 kubelet[2667]: E0113 20:35:56.457018 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:35:56.457034 kubelet[2667]: W0113 20:35:56.457033 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:35:56.457103 kubelet[2667]: E0113 20:35:56.457051 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 13 20:35:56.457345 kubelet[2667]: E0113 20:35:56.457322 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:56.457345 kubelet[2667]: W0113 20:35:56.457337 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:56.457500 kubelet[2667]: E0113 20:35:56.457356 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:56.457611 kubelet[2667]: E0113 20:35:56.457592 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:56.457611 kubelet[2667]: W0113 20:35:56.457608 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:56.457676 kubelet[2667]: E0113 20:35:56.457626 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:56.457923 kubelet[2667]: E0113 20:35:56.457903 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:56.457923 kubelet[2667]: W0113 20:35:56.457918 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:56.458053 kubelet[2667]: E0113 20:35:56.458031 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:56.458219 kubelet[2667]: E0113 20:35:56.458198 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:56.458501 kubelet[2667]: W0113 20:35:56.458352 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:56.458501 kubelet[2667]: E0113 20:35:56.458380 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:56.458703 kubelet[2667]: E0113 20:35:56.458688 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:35:56.458819 kubelet[2667]: W0113 20:35:56.458777 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:35:56.458819 kubelet[2667]: E0113 20:35:56.458794 2667 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:35:56.484413 systemd[1]: Started cri-containerd-34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756.scope - libcontainer container 34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756.
Jan 13 20:35:56.520878 containerd[1484]: time="2025-01-13T20:35:56.520834917Z" level=info msg="StartContainer for \"34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756\" returns successfully"
Jan 13 20:35:56.534279 systemd[1]: cri-containerd-34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756.scope: Deactivated successfully.
Jan 13 20:35:56.772704 containerd[1484]: time="2025-01-13T20:35:56.772608596Z" level=info msg="shim disconnected" id=34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756 namespace=k8s.io
Jan 13 20:35:56.772704 containerd[1484]: time="2025-01-13T20:35:56.772672797Z" level=warning msg="cleaning up after shim disconnected" id=34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756 namespace=k8s.io
Jan 13 20:35:56.772704 containerd[1484]: time="2025-01-13T20:35:56.772684740Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:35:57.060500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756-rootfs.mount: Deactivated successfully.
Jan 13 20:35:57.272193 kubelet[2667]: E0113 20:35:57.272133 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbrd5" podUID="6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d"
Jan 13 20:35:57.338307 kubelet[2667]: E0113 20:35:57.337639 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:57.339491 containerd[1484]: time="2025-01-13T20:35:57.339317885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 13 20:35:57.350953 kubelet[2667]: I0113 20:35:57.350866 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f4f4d5467-rrz4q" podStartSLOduration=3.312769052 podStartE2EDuration="6.350849104s" podCreationTimestamp="2025-01-13 20:35:51 +0000 UTC" firstStartedPulling="2025-01-13 20:35:52.016583526 +0000 UTC m=+20.900406316" lastFinishedPulling="2025-01-13 20:35:55.054663578 +0000 UTC m=+23.938486368" observedRunningTime="2025-01-13 20:35:55.339295552 +0000 UTC m=+24.223118352" watchObservedRunningTime="2025-01-13 20:35:57.350849104 +0000 UTC m=+26.234671894"
Jan 13 20:35:59.273005 kubelet[2667]: E0113 20:35:59.272947 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbrd5" podUID="6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d"
Jan 13 20:36:01.272971 kubelet[2667]: E0113 20:36:01.272926 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbrd5" podUID="6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d"
Jan 13 20:36:01.669355 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:37492.service - OpenSSH per-connection server daemon (10.0.0.1:37492).
Jan 13 20:36:01.708642 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 37492 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8
Jan 13 20:36:01.710562 sshd-session[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:36:01.717551 systemd-logind[1468]: New session 8 of user core.
Jan 13 20:36:01.725996 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 20:36:01.860534 sshd[3432]: Connection closed by 10.0.0.1 port 37492
Jan 13 20:36:01.860919 sshd-session[3430]: pam_unix(sshd:session): session closed for user core
Jan 13 20:36:01.866372 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:37492.service: Deactivated successfully.
Jan 13 20:36:01.868845 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 20:36:01.870054 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit.
Jan 13 20:36:01.871306 systemd-logind[1468]: Removed session 8.
Jan 13 20:36:03.128824 containerd[1484]: time="2025-01-13T20:36:03.128759921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:36:03.131362 containerd[1484]: time="2025-01-13T20:36:03.131322513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 13 20:36:03.132753 containerd[1484]: time="2025-01-13T20:36:03.132678379Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:36:03.135168 containerd[1484]: time="2025-01-13T20:36:03.135130774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:36:03.135881 containerd[1484]: time="2025-01-13T20:36:03.135844995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.796475653s"
Jan 13 20:36:03.135917 containerd[1484]: time="2025-01-13T20:36:03.135882225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 13 20:36:03.139138 containerd[1484]: time="2025-01-13T20:36:03.139095419Z" level=info msg="CreateContainer within sandbox \"1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 20:36:03.183717 containerd[1484]: time="2025-01-13T20:36:03.183654532Z" level=info msg="CreateContainer within sandbox \"1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f\""
Jan 13 20:36:03.184505 containerd[1484]: time="2025-01-13T20:36:03.184443002Z" level=info msg="StartContainer for \"bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f\""
Jan 13 20:36:03.229394 systemd[1]: Started cri-containerd-bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f.scope - libcontainer container bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f.
Jan 13 20:36:03.273344 kubelet[2667]: E0113 20:36:03.273283 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbrd5" podUID="6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d"
Jan 13 20:36:03.405748 containerd[1484]: time="2025-01-13T20:36:03.405298292Z" level=info msg="StartContainer for \"bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f\" returns successfully"
Jan 13 20:36:03.408629 kubelet[2667]: E0113 20:36:03.408601 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:36:04.316945 containerd[1484]: time="2025-01-13T20:36:04.316894170Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:36:04.319972 systemd[1]: cri-containerd-bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f.scope: Deactivated successfully.
Jan 13 20:36:04.320910 kubelet[2667]: I0113 20:36:04.320867 2667 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 20:36:04.349287 kubelet[2667]: I0113 20:36:04.348654 2667 topology_manager.go:215] "Topology Admit Handler" podUID="ac41f38c-55b9-4a77-8a90-e5737b17fd15" podNamespace="calico-system" podName="calico-kube-controllers-bc744d498-xqhts"
Jan 13 20:36:04.349287 kubelet[2667]: I0113 20:36:04.349065 2667 topology_manager.go:215] "Topology Admit Handler" podUID="b6de0b00-2ea2-4589-a7b0-2c07a644bac8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bgl8r"
Jan 13 20:36:04.349468 kubelet[2667]: I0113 20:36:04.349359 2667 topology_manager.go:215] "Topology Admit Handler" podUID="e2b6f819-04b2-4600-8686-8ab182ac15ce" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j46wk"
Jan 13 20:36:04.351585 kubelet[2667]: I0113 20:36:04.351539 2667 topology_manager.go:215] "Topology Admit Handler" podUID="93092177-1d32-4f13-a83c-5cb4c8aca67a" podNamespace="calico-apiserver" podName="calico-apiserver-5d65d67f4f-qmkl8"
Jan 13 20:36:04.356634 kubelet[2667]: I0113 20:36:04.355237 2667 topology_manager.go:215] "Topology Admit Handler" podUID="bdcc8529-6e18-4c7a-bfcb-a3677ead2383" podNamespace="calico-apiserver" podName="calico-apiserver-5d65d67f4f-fz2mt"
Jan 13 20:36:04.357313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f-rootfs.mount: Deactivated successfully.
Jan 13 20:36:04.367385 systemd[1]: Created slice kubepods-burstable-pode2b6f819_04b2_4600_8686_8ab182ac15ce.slice - libcontainer container kubepods-burstable-pode2b6f819_04b2_4600_8686_8ab182ac15ce.slice.
Jan 13 20:36:04.374319 systemd[1]: Created slice kubepods-besteffort-podac41f38c_55b9_4a77_8a90_e5737b17fd15.slice - libcontainer container kubepods-besteffort-podac41f38c_55b9_4a77_8a90_e5737b17fd15.slice.
Jan 13 20:36:04.380765 systemd[1]: Created slice kubepods-burstable-podb6de0b00_2ea2_4589_a7b0_2c07a644bac8.slice - libcontainer container kubepods-burstable-podb6de0b00_2ea2_4589_a7b0_2c07a644bac8.slice.
Jan 13 20:36:04.385864 systemd[1]: Created slice kubepods-besteffort-pod93092177_1d32_4f13_a83c_5cb4c8aca67a.slice - libcontainer container kubepods-besteffort-pod93092177_1d32_4f13_a83c_5cb4c8aca67a.slice.
Jan 13 20:36:04.392264 systemd[1]: Created slice kubepods-besteffort-podbdcc8529_6e18_4c7a_bfcb_a3677ead2383.slice - libcontainer container kubepods-besteffort-podbdcc8529_6e18_4c7a_bfcb_a3677ead2383.slice.
Jan 13 20:36:04.409399 kubelet[2667]: I0113 20:36:04.409363 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2b6f819-04b2-4600-8686-8ab182ac15ce-config-volume\") pod \"coredns-7db6d8ff4d-j46wk\" (UID: \"e2b6f819-04b2-4600-8686-8ab182ac15ce\") " pod="kube-system/coredns-7db6d8ff4d-j46wk"
Jan 13 20:36:04.409875 kubelet[2667]: I0113 20:36:04.409590 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27brv\" (UniqueName: \"kubernetes.io/projected/bdcc8529-6e18-4c7a-bfcb-a3677ead2383-kube-api-access-27brv\") pod \"calico-apiserver-5d65d67f4f-fz2mt\" (UID: \"bdcc8529-6e18-4c7a-bfcb-a3677ead2383\") " pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt"
Jan 13 20:36:04.409875 kubelet[2667]: I0113 20:36:04.409628 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bdcc8529-6e18-4c7a-bfcb-a3677ead2383-calico-apiserver-certs\") pod \"calico-apiserver-5d65d67f4f-fz2mt\" (UID: \"bdcc8529-6e18-4c7a-bfcb-a3677ead2383\") " pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt"
Jan 13 20:36:04.409875 kubelet[2667]: I0113 20:36:04.409650 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/93092177-1d32-4f13-a83c-5cb4c8aca67a-calico-apiserver-certs\") pod \"calico-apiserver-5d65d67f4f-qmkl8\" (UID: \"93092177-1d32-4f13-a83c-5cb4c8aca67a\") " pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8"
Jan 13 20:36:04.409875 kubelet[2667]: I0113 20:36:04.409674 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6de0b00-2ea2-4589-a7b0-2c07a644bac8-config-volume\") pod \"coredns-7db6d8ff4d-bgl8r\" (UID: \"b6de0b00-2ea2-4589-a7b0-2c07a644bac8\") " pod="kube-system/coredns-7db6d8ff4d-bgl8r"
Jan 13 20:36:04.409875 kubelet[2667]: I0113 20:36:04.409703 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clblm\" (UniqueName: \"kubernetes.io/projected/93092177-1d32-4f13-a83c-5cb4c8aca67a-kube-api-access-clblm\") pod \"calico-apiserver-5d65d67f4f-qmkl8\" (UID: \"93092177-1d32-4f13-a83c-5cb4c8aca67a\") " pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8"
Jan 13 20:36:04.410018 kubelet[2667]: I0113 20:36:04.409721 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wkpn\" (UniqueName: \"kubernetes.io/projected/ac41f38c-55b9-4a77-8a90-e5737b17fd15-kube-api-access-2wkpn\") pod \"calico-kube-controllers-bc744d498-xqhts\" (UID: \"ac41f38c-55b9-4a77-8a90-e5737b17fd15\") " pod="calico-system/calico-kube-controllers-bc744d498-xqhts"
Jan 13 20:36:04.410018 kubelet[2667]: I0113 20:36:04.409740 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bhq5\" (UniqueName: \"kubernetes.io/projected/e2b6f819-04b2-4600-8686-8ab182ac15ce-kube-api-access-5bhq5\") pod \"coredns-7db6d8ff4d-j46wk\" (UID: \"e2b6f819-04b2-4600-8686-8ab182ac15ce\") " pod="kube-system/coredns-7db6d8ff4d-j46wk"
Jan 13 20:36:04.410018 kubelet[2667]: I0113 20:36:04.409756 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d5xz\" (UniqueName: \"kubernetes.io/projected/b6de0b00-2ea2-4589-a7b0-2c07a644bac8-kube-api-access-5d5xz\") pod \"coredns-7db6d8ff4d-bgl8r\" (UID: \"b6de0b00-2ea2-4589-a7b0-2c07a644bac8\") " pod="kube-system/coredns-7db6d8ff4d-bgl8r"
Jan 13 20:36:04.410018 kubelet[2667]: I0113 20:36:04.409774 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac41f38c-55b9-4a77-8a90-e5737b17fd15-tigera-ca-bundle\") pod \"calico-kube-controllers-bc744d498-xqhts\" (UID: \"ac41f38c-55b9-4a77-8a90-e5737b17fd15\") " pod="calico-system/calico-kube-controllers-bc744d498-xqhts"
Jan 13 20:36:04.410718 kubelet[2667]: E0113 20:36:04.410682 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:36:04.588787 containerd[1484]: time="2025-01-13T20:36:04.588604277Z" level=info msg="shim disconnected" id=bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f namespace=k8s.io
Jan 13 20:36:04.588787 containerd[1484]: time="2025-01-13T20:36:04.588670100Z" level=warning msg="cleaning up after shim disconnected" id=bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f namespace=k8s.io
Jan 13 20:36:04.588787 containerd[1484]: time="2025-01-13T20:36:04.588679047Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:36:04.690745 kubelet[2667]: E0113 20:36:04.689996 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:36:04.690745 kubelet[2667]: E0113 20:36:04.690540 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:36:04.690948 containerd[1484]: time="2025-01-13T20:36:04.690285415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-qmkl8,Uid:93092177-1d32-4f13-a83c-5cb4c8aca67a,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 20:36:04.690948 containerd[1484]: time="2025-01-13T20:36:04.690703269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc744d498-xqhts,Uid:ac41f38c-55b9-4a77-8a90-e5737b17fd15,Namespace:calico-system,Attempt:0,}"
Jan 13 20:36:04.691298 containerd[1484]: time="2025-01-13T20:36:04.691206955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bgl8r,Uid:b6de0b00-2ea2-4589-a7b0-2c07a644bac8,Namespace:kube-system,Attempt:0,}"
Jan 13 20:36:04.691458 containerd[1484]: time="2025-01-13T20:36:04.691208668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46wk,Uid:e2b6f819-04b2-4600-8686-8ab182ac15ce,Namespace:kube-system,Attempt:0,}"
Jan 13 20:36:04.694973 containerd[1484]: time="2025-01-13T20:36:04.694919806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-fz2mt,Uid:bdcc8529-6e18-4c7a-bfcb-a3677ead2383,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 20:36:04.881380 containerd[1484]: time="2025-01-13T20:36:04.881231758Z" level=error msg="Failed to destroy network for sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.881898 containerd[1484]: time="2025-01-13T20:36:04.881635356Z" level=error msg="encountered an error cleaning up failed sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.881898 containerd[1484]: time="2025-01-13T20:36:04.881694527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-qmkl8,Uid:93092177-1d32-4f13-a83c-5cb4c8aca67a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.882078 kubelet[2667]: E0113 20:36:04.882005 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.882162 kubelet[2667]: E0113 20:36:04.882118 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8"
Jan 13 20:36:04.882162 kubelet[2667]: E0113 20:36:04.882145 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8"
Jan 13 20:36:04.882318 kubelet[2667]: E0113 20:36:04.882194 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d65d67f4f-qmkl8_calico-apiserver(93092177-1d32-4f13-a83c-5cb4c8aca67a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d65d67f4f-qmkl8_calico-apiserver(93092177-1d32-4f13-a83c-5cb4c8aca67a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" podUID="93092177-1d32-4f13-a83c-5cb4c8aca67a"
Jan 13 20:36:04.888132 containerd[1484]: time="2025-01-13T20:36:04.887960101Z" level=error msg="Failed to destroy network for sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.890394 containerd[1484]: time="2025-01-13T20:36:04.890358184Z" level=error msg="encountered an error cleaning up failed sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.890513 containerd[1484]: time="2025-01-13T20:36:04.890491694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc744d498-xqhts,Uid:ac41f38c-55b9-4a77-8a90-e5737b17fd15,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.890813 kubelet[2667]: E0113 20:36:04.890781 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.891819 kubelet[2667]: E0113 20:36:04.890960 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc744d498-xqhts"
Jan 13 20:36:04.891819 kubelet[2667]: E0113 20:36:04.890990 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc744d498-xqhts"
Jan 13 20:36:04.891819 kubelet[2667]: E0113 20:36:04.891041 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bc744d498-xqhts_calico-system(ac41f38c-55b9-4a77-8a90-e5737b17fd15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bc744d498-xqhts_calico-system(ac41f38c-55b9-4a77-8a90-e5737b17fd15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" podUID="ac41f38c-55b9-4a77-8a90-e5737b17fd15"
Jan 13 20:36:04.892625 containerd[1484]: time="2025-01-13T20:36:04.892591588Z" level=error msg="Failed to destroy network for sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.893152 containerd[1484]: time="2025-01-13T20:36:04.893122484Z" level=error msg="encountered an error cleaning up failed sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.893515 containerd[1484]: time="2025-01-13T20:36:04.893278617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46wk,Uid:e2b6f819-04b2-4600-8686-8ab182ac15ce,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.893579 kubelet[2667]: E0113 20:36:04.893490 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.893579 kubelet[2667]: E0113 20:36:04.893558 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j46wk"
Jan 13 20:36:04.893767 kubelet[2667]: E0113 20:36:04.893583 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j46wk"
Jan 13 20:36:04.893767 kubelet[2667]: E0113 20:36:04.893629 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j46wk_kube-system(e2b6f819-04b2-4600-8686-8ab182ac15ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j46wk_kube-system(e2b6f819-04b2-4600-8686-8ab182ac15ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j46wk" podUID="e2b6f819-04b2-4600-8686-8ab182ac15ce"
Jan 13 20:36:04.899454 containerd[1484]: time="2025-01-13T20:36:04.899415630Z" level=error msg="Failed to destroy network for sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.899916 containerd[1484]: time="2025-01-13T20:36:04.899876806Z" level=error msg="encountered an error cleaning up failed sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.899916 containerd[1484]: time="2025-01-13T20:36:04.899925678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bgl8r,Uid:b6de0b00-2ea2-4589-a7b0-2c07a644bac8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.900311 kubelet[2667]: E0113 20:36:04.900270 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.900397 kubelet[2667]: E0113 20:36:04.900343 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bgl8r"
Jan 13 20:36:04.900397 kubelet[2667]: E0113 20:36:04.900365 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bgl8r"
Jan 13 20:36:04.900467 kubelet[2667]: E0113 20:36:04.900407 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bgl8r_kube-system(b6de0b00-2ea2-4589-a7b0-2c07a644bac8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bgl8r_kube-system(b6de0b00-2ea2-4589-a7b0-2c07a644bac8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bgl8r" podUID="b6de0b00-2ea2-4589-a7b0-2c07a644bac8"
Jan 13 20:36:04.907755 containerd[1484]: time="2025-01-13T20:36:04.907688052Z" level=error msg="Failed to destroy network for sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.908199 containerd[1484]: time="2025-01-13T20:36:04.908161201Z" level=error msg="encountered an error cleaning up failed sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.908287 containerd[1484]: time="2025-01-13T20:36:04.908236492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-fz2mt,Uid:bdcc8529-6e18-4c7a-bfcb-a3677ead2383,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.908512 kubelet[2667]: E0113 20:36:04.908476 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:36:04.908561 kubelet[2667]: E0113 20:36:04.908522 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt"
Jan 13 20:36:04.908561 kubelet[2667]: E0113 20:36:04.908539 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt"
Jan 13 20:36:04.908615 kubelet[2667]: E0113 20:36:04.908577 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d65d67f4f-fz2mt_calico-apiserver(bdcc8529-6e18-4c7a-bfcb-a3677ead2383)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d65d67f4f-fz2mt_calico-apiserver(bdcc8529-6e18-4c7a-bfcb-a3677ead2383)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" podUID="bdcc8529-6e18-4c7a-bfcb-a3677ead2383"
Jan 13 20:36:05.277920 systemd[1]: Created slice kubepods-besteffort-pod6d5d7d4c_88ab_4857_ac1f_0b6b5fd9a24d.slice - libcontainer container
kubepods-besteffort-pod6d5d7d4c_88ab_4857_ac1f_0b6b5fd9a24d.slice. Jan 13 20:36:05.279968 containerd[1484]: time="2025-01-13T20:36:05.279936556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbrd5,Uid:6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d,Namespace:calico-system,Attempt:0,}" Jan 13 20:36:05.381990 containerd[1484]: time="2025-01-13T20:36:05.381929954Z" level=error msg="Failed to destroy network for sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.382442 containerd[1484]: time="2025-01-13T20:36:05.382368337Z" level=error msg="encountered an error cleaning up failed sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.382442 containerd[1484]: time="2025-01-13T20:36:05.382420645Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbrd5,Uid:6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.382688 kubelet[2667]: E0113 20:36:05.382643 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.382958 kubelet[2667]: E0113 20:36:05.382703 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:36:05.382958 kubelet[2667]: E0113 20:36:05.382729 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:36:05.382958 kubelet[2667]: E0113 20:36:05.382776 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kbrd5_calico-system(6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kbrd5_calico-system(6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kbrd5" podUID="6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d" Jan 13 20:36:05.384433 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec-shm.mount: Deactivated successfully. Jan 13 20:36:05.413402 kubelet[2667]: I0113 20:36:05.413368 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec" Jan 13 20:36:05.414061 containerd[1484]: time="2025-01-13T20:36:05.414019459Z" level=info msg="StopPodSandbox for \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\"" Jan 13 20:36:05.414805 containerd[1484]: time="2025-01-13T20:36:05.414263197Z" level=info msg="Ensure that sandbox b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec in task-service has been cleanup successfully" Jan 13 20:36:05.414805 containerd[1484]: time="2025-01-13T20:36:05.414483010Z" level=info msg="TearDown network for sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" successfully" Jan 13 20:36:05.414805 containerd[1484]: time="2025-01-13T20:36:05.414494402Z" level=info msg="StopPodSandbox for \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" returns successfully" Jan 13 20:36:05.414920 kubelet[2667]: I0113 20:36:05.414474 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8" Jan 13 20:36:05.415308 containerd[1484]: time="2025-01-13T20:36:05.415056407Z" level=info msg="StopPodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\"" Jan 13 20:36:05.415308 containerd[1484]: time="2025-01-13T20:36:05.415268304Z" level=info msg="Ensure that sandbox 167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8 in task-service has been cleanup successfully" Jan 13 20:36:05.415591 containerd[1484]: time="2025-01-13T20:36:05.415422724Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-kbrd5,Uid:6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d,Namespace:calico-system,Attempt:1,}" Jan 13 20:36:05.415713 containerd[1484]: time="2025-01-13T20:36:05.415676682Z" level=info msg="TearDown network for sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" successfully" Jan 13 20:36:05.415713 containerd[1484]: time="2025-01-13T20:36:05.415691910Z" level=info msg="StopPodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" returns successfully" Jan 13 20:36:05.416174 containerd[1484]: time="2025-01-13T20:36:05.416143678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc744d498-xqhts,Uid:ac41f38c-55b9-4a77-8a90-e5737b17fd15,Namespace:calico-system,Attempt:1,}" Jan 13 20:36:05.416572 systemd[1]: run-netns-cni\x2d44d91470\x2d3703\x2d2c2d\x2d8c09\x2dfb36cee97063.mount: Deactivated successfully. Jan 13 20:36:05.417953 kubelet[2667]: E0113 20:36:05.417920 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:05.419010 containerd[1484]: time="2025-01-13T20:36:05.418972349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 20:36:05.419886 kubelet[2667]: I0113 20:36:05.419848 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793" Jan 13 20:36:05.421063 containerd[1484]: time="2025-01-13T20:36:05.420985599Z" level=info msg="StopPodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\"" Jan 13 20:36:05.421398 containerd[1484]: time="2025-01-13T20:36:05.421203437Z" level=info msg="Ensure that sandbox 89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793 in task-service has been cleanup successfully" Jan 13 20:36:05.421704 systemd[1]: 
run-netns-cni\x2df8697200\x2dcb74\x2d9bbc\x2d65aa\x2dc479d60086bb.mount: Deactivated successfully. Jan 13 20:36:05.422303 containerd[1484]: time="2025-01-13T20:36:05.422272334Z" level=info msg="TearDown network for sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" successfully" Jan 13 20:36:05.422303 containerd[1484]: time="2025-01-13T20:36:05.422299255Z" level=info msg="StopPodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" returns successfully" Jan 13 20:36:05.423300 containerd[1484]: time="2025-01-13T20:36:05.423268815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-fz2mt,Uid:bdcc8529-6e18-4c7a-bfcb-a3677ead2383,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:36:05.424062 kubelet[2667]: I0113 20:36:05.424019 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7" Jan 13 20:36:05.424451 containerd[1484]: time="2025-01-13T20:36:05.424420177Z" level=info msg="StopPodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\"" Jan 13 20:36:05.424605 containerd[1484]: time="2025-01-13T20:36:05.424576610Z" level=info msg="Ensure that sandbox 4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7 in task-service has been cleanup successfully" Jan 13 20:36:05.425677 containerd[1484]: time="2025-01-13T20:36:05.425647731Z" level=info msg="TearDown network for sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" successfully" Jan 13 20:36:05.425677 containerd[1484]: time="2025-01-13T20:36:05.425667719Z" level=info msg="StopPodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" returns successfully" Jan 13 20:36:05.425821 kubelet[2667]: I0113 20:36:05.425765 2667 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b" Jan 13 20:36:05.426548 systemd[1]: run-netns-cni\x2dba7250e0\x2d14a5\x2d712a\x2dd6ab\x2d2927e8967bcc.mount: Deactivated successfully. Jan 13 20:36:05.428311 kubelet[2667]: E0113 20:36:05.428200 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:05.428672 containerd[1484]: time="2025-01-13T20:36:05.428637935Z" level=info msg="StopPodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\"" Jan 13 20:36:05.428672 containerd[1484]: time="2025-01-13T20:36:05.428665847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46wk,Uid:e2b6f819-04b2-4600-8686-8ab182ac15ce,Namespace:kube-system,Attempt:1,}" Jan 13 20:36:05.428929 containerd[1484]: time="2025-01-13T20:36:05.428906419Z" level=info msg="Ensure that sandbox fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b in task-service has been cleanup successfully" Jan 13 20:36:05.429393 containerd[1484]: time="2025-01-13T20:36:05.429291653Z" level=info msg="TearDown network for sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" successfully" Jan 13 20:36:05.429393 containerd[1484]: time="2025-01-13T20:36:05.429319805Z" level=info msg="StopPodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" returns successfully" Jan 13 20:36:05.429520 kubelet[2667]: I0113 20:36:05.429347 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d" Jan 13 20:36:05.429655 kubelet[2667]: E0113 20:36:05.429619 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:05.429736 
containerd[1484]: time="2025-01-13T20:36:05.429711952Z" level=info msg="StopPodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\"" Jan 13 20:36:05.429924 containerd[1484]: time="2025-01-13T20:36:05.429901587Z" level=info msg="Ensure that sandbox 1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d in task-service has been cleanup successfully" Jan 13 20:36:05.430018 containerd[1484]: time="2025-01-13T20:36:05.429986778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bgl8r,Uid:b6de0b00-2ea2-4589-a7b0-2c07a644bac8,Namespace:kube-system,Attempt:1,}" Jan 13 20:36:05.430151 containerd[1484]: time="2025-01-13T20:36:05.430127021Z" level=info msg="TearDown network for sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" successfully" Jan 13 20:36:05.430151 containerd[1484]: time="2025-01-13T20:36:05.430145115Z" level=info msg="StopPodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" returns successfully" Jan 13 20:36:05.430600 containerd[1484]: time="2025-01-13T20:36:05.430569131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-qmkl8,Uid:93092177-1d32-4f13-a83c-5cb4c8aca67a,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:36:05.571338 containerd[1484]: time="2025-01-13T20:36:05.571039337Z" level=error msg="Failed to destroy network for sandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.571819 containerd[1484]: time="2025-01-13T20:36:05.571777613Z" level=error msg="encountered an error cleaning up failed sandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.571941 containerd[1484]: time="2025-01-13T20:36:05.571920481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbrd5,Uid:6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.572321 kubelet[2667]: E0113 20:36:05.572274 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.572767 kubelet[2667]: E0113 20:36:05.572653 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:36:05.572767 kubelet[2667]: E0113 20:36:05.572701 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:36:05.572942 kubelet[2667]: E0113 20:36:05.572745 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kbrd5_calico-system(6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kbrd5_calico-system(6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kbrd5" podUID="6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d" Jan 13 20:36:05.578611 containerd[1484]: time="2025-01-13T20:36:05.578552472Z" level=error msg="Failed to destroy network for sandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.581698 containerd[1484]: time="2025-01-13T20:36:05.581545893Z" level=error msg="encountered an error cleaning up failed sandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.581698 containerd[1484]: time="2025-01-13T20:36:05.581604302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc744d498-xqhts,Uid:ac41f38c-55b9-4a77-8a90-e5737b17fd15,Namespace:calico-system,Attempt:1,} 
failed, error" error="failed to setup network for sandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.581880 kubelet[2667]: E0113 20:36:05.581818 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.581942 kubelet[2667]: E0113 20:36:05.581896 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" Jan 13 20:36:05.581942 kubelet[2667]: E0113 20:36:05.581917 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" Jan 13 20:36:05.581993 kubelet[2667]: E0113 20:36:05.581966 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-bc744d498-xqhts_calico-system(ac41f38c-55b9-4a77-8a90-e5737b17fd15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bc744d498-xqhts_calico-system(ac41f38c-55b9-4a77-8a90-e5737b17fd15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" podUID="ac41f38c-55b9-4a77-8a90-e5737b17fd15" Jan 13 20:36:05.585709 containerd[1484]: time="2025-01-13T20:36:05.585561141Z" level=error msg="Failed to destroy network for sandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.586425 containerd[1484]: time="2025-01-13T20:36:05.586386711Z" level=error msg="encountered an error cleaning up failed sandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.586489 containerd[1484]: time="2025-01-13T20:36:05.586455500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-fz2mt,Uid:bdcc8529-6e18-4c7a-bfcb-a3677ead2383,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.587933 kubelet[2667]: E0113 20:36:05.586703 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.587933 kubelet[2667]: E0113 20:36:05.586771 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" Jan 13 20:36:05.587933 kubelet[2667]: E0113 20:36:05.586792 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" Jan 13 20:36:05.588055 kubelet[2667]: E0113 20:36:05.586850 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d65d67f4f-fz2mt_calico-apiserver(bdcc8529-6e18-4c7a-bfcb-a3677ead2383)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d65d67f4f-fz2mt_calico-apiserver(bdcc8529-6e18-4c7a-bfcb-a3677ead2383)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" podUID="bdcc8529-6e18-4c7a-bfcb-a3677ead2383" Jan 13 20:36:05.589207 containerd[1484]: time="2025-01-13T20:36:05.589170457Z" level=error msg="Failed to destroy network for sandbox \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.589762 containerd[1484]: time="2025-01-13T20:36:05.589709179Z" level=error msg="encountered an error cleaning up failed sandbox \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.589992 containerd[1484]: time="2025-01-13T20:36:05.589793437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bgl8r,Uid:b6de0b00-2ea2-4589-a7b0-2c07a644bac8,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.590050 kubelet[2667]: E0113 20:36:05.590021 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.590116 kubelet[2667]: E0113 20:36:05.590063 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bgl8r" Jan 13 20:36:05.590116 kubelet[2667]: E0113 20:36:05.590090 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bgl8r" Jan 13 20:36:05.590190 kubelet[2667]: E0113 20:36:05.590129 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bgl8r_kube-system(b6de0b00-2ea2-4589-a7b0-2c07a644bac8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bgl8r_kube-system(b6de0b00-2ea2-4589-a7b0-2c07a644bac8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bgl8r" 
podUID="b6de0b00-2ea2-4589-a7b0-2c07a644bac8" Jan 13 20:36:05.601062 containerd[1484]: time="2025-01-13T20:36:05.601019013Z" level=error msg="Failed to destroy network for sandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.601651 containerd[1484]: time="2025-01-13T20:36:05.601625962Z" level=error msg="encountered an error cleaning up failed sandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.601698 containerd[1484]: time="2025-01-13T20:36:05.601680604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46wk,Uid:e2b6f819-04b2-4600-8686-8ab182ac15ce,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.601900 kubelet[2667]: E0113 20:36:05.601859 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.601971 kubelet[2667]: E0113 20:36:05.601919 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j46wk" Jan 13 20:36:05.601971 kubelet[2667]: E0113 20:36:05.601942 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j46wk" Jan 13 20:36:05.602025 kubelet[2667]: E0113 20:36:05.601989 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j46wk_kube-system(e2b6f819-04b2-4600-8686-8ab182ac15ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j46wk_kube-system(e2b6f819-04b2-4600-8686-8ab182ac15ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j46wk" podUID="e2b6f819-04b2-4600-8686-8ab182ac15ce" Jan 13 20:36:05.602204 containerd[1484]: time="2025-01-13T20:36:05.602149995Z" level=error msg="Failed to destroy network for sandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.602604 containerd[1484]: time="2025-01-13T20:36:05.602578389Z" level=error msg="encountered an error cleaning up failed sandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.602675 containerd[1484]: time="2025-01-13T20:36:05.602634865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-qmkl8,Uid:93092177-1d32-4f13-a83c-5cb4c8aca67a,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.602865 kubelet[2667]: E0113 20:36:05.602824 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:05.603006 kubelet[2667]: E0113 20:36:05.602973 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" Jan 13 20:36:05.603006 kubelet[2667]: E0113 20:36:05.602992 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" Jan 13 20:36:05.603092 kubelet[2667]: E0113 20:36:05.603034 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d65d67f4f-qmkl8_calico-apiserver(93092177-1d32-4f13-a83c-5cb4c8aca67a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d65d67f4f-qmkl8_calico-apiserver(93092177-1d32-4f13-a83c-5cb4c8aca67a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" podUID="93092177-1d32-4f13-a83c-5cb4c8aca67a" Jan 13 20:36:06.358686 systemd[1]: run-netns-cni\x2d815eb5ee\x2dc41f\x2d14c6\x2d1849\x2d9e69ff1124fb.mount: Deactivated successfully. Jan 13 20:36:06.358816 systemd[1]: run-netns-cni\x2dbd8d9299\x2d05c6\x2d46e9\x2da41d\x2dc46e5ee72166.mount: Deactivated successfully. Jan 13 20:36:06.358904 systemd[1]: run-netns-cni\x2dcdf8b35b\x2d2f5d\x2dd4ec\x2db381\x2d0470ab15ffd4.mount: Deactivated successfully. 
Jan 13 20:36:06.432488 kubelet[2667]: I0113 20:36:06.432444 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a" Jan 13 20:36:06.433306 containerd[1484]: time="2025-01-13T20:36:06.433131516Z" level=info msg="StopPodSandbox for \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\"" Jan 13 20:36:06.436307 containerd[1484]: time="2025-01-13T20:36:06.433368271Z" level=info msg="Ensure that sandbox 0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a in task-service has been cleanup successfully" Jan 13 20:36:06.436307 containerd[1484]: time="2025-01-13T20:36:06.433681628Z" level=info msg="TearDown network for sandbox \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\" successfully" Jan 13 20:36:06.436307 containerd[1484]: time="2025-01-13T20:36:06.433692829Z" level=info msg="StopPodSandbox for \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\" returns successfully" Jan 13 20:36:06.436645 systemd[1]: run-netns-cni\x2d6ffd6535\x2dc877\x2d5712\x2d9189\x2d8fa914d50812.mount: Deactivated successfully. 
Jan 13 20:36:06.436985 containerd[1484]: time="2025-01-13T20:36:06.436942771Z" level=info msg="StopPodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\"" Jan 13 20:36:06.437134 containerd[1484]: time="2025-01-13T20:36:06.437079697Z" level=info msg="TearDown network for sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" successfully" Jan 13 20:36:06.437134 containerd[1484]: time="2025-01-13T20:36:06.437099394Z" level=info msg="StopPodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" returns successfully" Jan 13 20:36:06.437965 kubelet[2667]: I0113 20:36:06.437428 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef" Jan 13 20:36:06.437965 kubelet[2667]: E0113 20:36:06.437487 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:06.440481 containerd[1484]: time="2025-01-13T20:36:06.438436164Z" level=info msg="StopPodSandbox for \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\"" Jan 13 20:36:06.440481 containerd[1484]: time="2025-01-13T20:36:06.438670474Z" level=info msg="Ensure that sandbox dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef in task-service has been cleanup successfully" Jan 13 20:36:06.440481 containerd[1484]: time="2025-01-13T20:36:06.438903251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bgl8r,Uid:b6de0b00-2ea2-4589-a7b0-2c07a644bac8,Namespace:kube-system,Attempt:2,}" Jan 13 20:36:06.440754 containerd[1484]: time="2025-01-13T20:36:06.440717035Z" level=info msg="TearDown network for sandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\" successfully" Jan 13 20:36:06.440754 containerd[1484]: time="2025-01-13T20:36:06.440747003Z" level=info 
msg="StopPodSandbox for \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\" returns successfully" Jan 13 20:36:06.441225 containerd[1484]: time="2025-01-13T20:36:06.441194423Z" level=info msg="StopPodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\"" Jan 13 20:36:06.441748 containerd[1484]: time="2025-01-13T20:36:06.441419565Z" level=info msg="TearDown network for sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" successfully" Jan 13 20:36:06.441748 containerd[1484]: time="2025-01-13T20:36:06.441438761Z" level=info msg="StopPodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" returns successfully" Jan 13 20:36:06.442141 kubelet[2667]: I0113 20:36:06.441445 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd" Jan 13 20:36:06.443279 containerd[1484]: time="2025-01-13T20:36:06.442274319Z" level=info msg="StopPodSandbox for \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\"" Jan 13 20:36:06.443279 containerd[1484]: time="2025-01-13T20:36:06.442352937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-fz2mt,Uid:bdcc8529-6e18-4c7a-bfcb-a3677ead2383,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:36:06.443279 containerd[1484]: time="2025-01-13T20:36:06.442524820Z" level=info msg="Ensure that sandbox efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd in task-service has been cleanup successfully" Jan 13 20:36:06.443279 containerd[1484]: time="2025-01-13T20:36:06.443262294Z" level=info msg="TearDown network for sandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\" successfully" Jan 13 20:36:06.443279 containerd[1484]: time="2025-01-13T20:36:06.443278624Z" level=info msg="StopPodSandbox for \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\" returns 
successfully" Jan 13 20:36:06.443232 systemd[1]: run-netns-cni\x2d4652f110\x2d02bf\x2d6db6\x2d176a\x2d412f7a08fd36.mount: Deactivated successfully. Jan 13 20:36:06.443535 containerd[1484]: time="2025-01-13T20:36:06.443507364Z" level=info msg="StopPodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\"" Jan 13 20:36:06.443641 containerd[1484]: time="2025-01-13T20:36:06.443614535Z" level=info msg="TearDown network for sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" successfully" Jan 13 20:36:06.443641 containerd[1484]: time="2025-01-13T20:36:06.443636136Z" level=info msg="StopPodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" returns successfully" Jan 13 20:36:06.444868 containerd[1484]: time="2025-01-13T20:36:06.444819027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-qmkl8,Uid:93092177-1d32-4f13-a83c-5cb4c8aca67a,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:36:06.445460 kubelet[2667]: I0113 20:36:06.445398 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689" Jan 13 20:36:06.446457 containerd[1484]: time="2025-01-13T20:36:06.446429410Z" level=info msg="StopPodSandbox for \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\"" Jan 13 20:36:06.446651 containerd[1484]: time="2025-01-13T20:36:06.446630016Z" level=info msg="Ensure that sandbox 8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689 in task-service has been cleanup successfully" Jan 13 20:36:06.447043 systemd[1]: run-netns-cni\x2d8abb0e42\x2dd7c3\x2dff7c\x2d8909\x2dd4792e9cd2e0.mount: Deactivated successfully. 
Jan 13 20:36:06.447181 containerd[1484]: time="2025-01-13T20:36:06.447120567Z" level=info msg="TearDown network for sandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\" successfully" Jan 13 20:36:06.447228 containerd[1484]: time="2025-01-13T20:36:06.447179197Z" level=info msg="StopPodSandbox for \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\" returns successfully" Jan 13 20:36:06.447469 kubelet[2667]: I0113 20:36:06.447439 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38" Jan 13 20:36:06.449195 containerd[1484]: time="2025-01-13T20:36:06.449160567Z" level=info msg="StopPodSandbox for \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\"" Jan 13 20:36:06.449418 containerd[1484]: time="2025-01-13T20:36:06.449385228Z" level=info msg="Ensure that sandbox 8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38 in task-service has been cleanup successfully" Jan 13 20:36:06.449692 containerd[1484]: time="2025-01-13T20:36:06.449655015Z" level=info msg="StopPodSandbox for \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\"" Jan 13 20:36:06.449766 containerd[1484]: time="2025-01-13T20:36:06.449698847Z" level=info msg="TearDown network for sandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\" successfully" Jan 13 20:36:06.449829 containerd[1484]: time="2025-01-13T20:36:06.449761525Z" level=info msg="TearDown network for sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" successfully" Jan 13 20:36:06.449829 containerd[1484]: time="2025-01-13T20:36:06.449771664Z" level=info msg="StopPodSandbox for \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\" returns successfully" Jan 13 20:36:06.449906 containerd[1484]: time="2025-01-13T20:36:06.449777344Z" level=info msg="StopPodSandbox for 
\"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" returns successfully" Jan 13 20:36:06.450205 systemd[1]: run-netns-cni\x2d925212f0\x2d53a1\x2d1a3b\x2d5191\x2d10e45205bbc8.mount: Deactivated successfully. Jan 13 20:36:06.450914 containerd[1484]: time="2025-01-13T20:36:06.450550045Z" level=info msg="StopPodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\"" Jan 13 20:36:06.450914 containerd[1484]: time="2025-01-13T20:36:06.450683255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbrd5,Uid:6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d,Namespace:calico-system,Attempt:2,}" Jan 13 20:36:06.450914 containerd[1484]: time="2025-01-13T20:36:06.450771391Z" level=info msg="TearDown network for sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" successfully" Jan 13 20:36:06.450914 containerd[1484]: time="2025-01-13T20:36:06.450834850Z" level=info msg="StopPodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" returns successfully" Jan 13 20:36:06.451570 kubelet[2667]: I0113 20:36:06.451185 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912" Jan 13 20:36:06.451610 containerd[1484]: time="2025-01-13T20:36:06.451394891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc744d498-xqhts,Uid:ac41f38c-55b9-4a77-8a90-e5737b17fd15,Namespace:calico-system,Attempt:2,}" Jan 13 20:36:06.452008 containerd[1484]: time="2025-01-13T20:36:06.451975321Z" level=info msg="StopPodSandbox for \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\"" Jan 13 20:36:06.452194 containerd[1484]: time="2025-01-13T20:36:06.452156161Z" level=info msg="Ensure that sandbox cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912 in task-service has been cleanup successfully" Jan 13 20:36:06.452420 containerd[1484]: 
time="2025-01-13T20:36:06.452381052Z" level=info msg="TearDown network for sandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\" successfully" Jan 13 20:36:06.452420 containerd[1484]: time="2025-01-13T20:36:06.452410958Z" level=info msg="StopPodSandbox for \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\" returns successfully" Jan 13 20:36:06.453010 containerd[1484]: time="2025-01-13T20:36:06.452972904Z" level=info msg="StopPodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\"" Jan 13 20:36:06.453168 containerd[1484]: time="2025-01-13T20:36:06.453127233Z" level=info msg="TearDown network for sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" successfully" Jan 13 20:36:06.453168 containerd[1484]: time="2025-01-13T20:36:06.453146199Z" level=info msg="StopPodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" returns successfully" Jan 13 20:36:06.453436 kubelet[2667]: E0113 20:36:06.453415 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:06.453735 containerd[1484]: time="2025-01-13T20:36:06.453694638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46wk,Uid:e2b6f819-04b2-4600-8686-8ab182ac15ce,Namespace:kube-system,Attempt:2,}" Jan 13 20:36:06.628424 containerd[1484]: time="2025-01-13T20:36:06.626429774Z" level=error msg="Failed to destroy network for sandbox \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.628424 containerd[1484]: time="2025-01-13T20:36:06.626912019Z" level=error msg="encountered an error cleaning up failed sandbox 
\"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.628424 containerd[1484]: time="2025-01-13T20:36:06.626971631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-qmkl8,Uid:93092177-1d32-4f13-a83c-5cb4c8aca67a,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.628613 kubelet[2667]: E0113 20:36:06.627374 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.628613 kubelet[2667]: E0113 20:36:06.627441 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" Jan 13 20:36:06.628613 kubelet[2667]: E0113 20:36:06.627468 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" Jan 13 20:36:06.628706 kubelet[2667]: E0113 20:36:06.627533 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d65d67f4f-qmkl8_calico-apiserver(93092177-1d32-4f13-a83c-5cb4c8aca67a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d65d67f4f-qmkl8_calico-apiserver(93092177-1d32-4f13-a83c-5cb4c8aca67a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" podUID="93092177-1d32-4f13-a83c-5cb4c8aca67a" Jan 13 20:36:06.628812 containerd[1484]: time="2025-01-13T20:36:06.628753646Z" level=error msg="Failed to destroy network for sandbox \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.629766 containerd[1484]: time="2025-01-13T20:36:06.629625964Z" level=error msg="Failed to destroy network for sandbox \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.630234 containerd[1484]: 
time="2025-01-13T20:36:06.630207977Z" level=error msg="encountered an error cleaning up failed sandbox \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.630444 containerd[1484]: time="2025-01-13T20:36:06.630380260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46wk,Uid:e2b6f819-04b2-4600-8686-8ab182ac15ce,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.630733 kubelet[2667]: E0113 20:36:06.630703 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.630800 kubelet[2667]: E0113 20:36:06.630737 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j46wk" Jan 13 20:36:06.630800 kubelet[2667]: E0113 20:36:06.630754 2667 kuberuntime_manager.go:1166] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j46wk" Jan 13 20:36:06.630886 kubelet[2667]: E0113 20:36:06.630799 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j46wk_kube-system(e2b6f819-04b2-4600-8686-8ab182ac15ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j46wk_kube-system(e2b6f819-04b2-4600-8686-8ab182ac15ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j46wk" podUID="e2b6f819-04b2-4600-8686-8ab182ac15ce" Jan 13 20:36:06.631311 containerd[1484]: time="2025-01-13T20:36:06.631139506Z" level=error msg="encountered an error cleaning up failed sandbox \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.631311 containerd[1484]: time="2025-01-13T20:36:06.631204928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc744d498-xqhts,Uid:ac41f38c-55b9-4a77-8a90-e5737b17fd15,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.631486 kubelet[2667]: E0113 20:36:06.631434 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.631661 kubelet[2667]: E0113 20:36:06.631506 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" Jan 13 20:36:06.631661 kubelet[2667]: E0113 20:36:06.631532 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" Jan 13 20:36:06.631661 kubelet[2667]: E0113 20:36:06.631587 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bc744d498-xqhts_calico-system(ac41f38c-55b9-4a77-8a90-e5737b17fd15)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-kube-controllers-bc744d498-xqhts_calico-system(ac41f38c-55b9-4a77-8a90-e5737b17fd15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" podUID="ac41f38c-55b9-4a77-8a90-e5737b17fd15" Jan 13 20:36:06.636361 containerd[1484]: time="2025-01-13T20:36:06.635314523Z" level=error msg="Failed to destroy network for sandbox \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.636361 containerd[1484]: time="2025-01-13T20:36:06.636328457Z" level=error msg="encountered an error cleaning up failed sandbox \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.636473 containerd[1484]: time="2025-01-13T20:36:06.636388239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-fz2mt,Uid:bdcc8529-6e18-4c7a-bfcb-a3677ead2383,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.637056 kubelet[2667]: 
E0113 20:36:06.636653 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.637056 kubelet[2667]: E0113 20:36:06.636713 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" Jan 13 20:36:06.637056 kubelet[2667]: E0113 20:36:06.636739 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" Jan 13 20:36:06.637290 kubelet[2667]: E0113 20:36:06.636785 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d65d67f4f-fz2mt_calico-apiserver(bdcc8529-6e18-4c7a-bfcb-a3677ead2383)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d65d67f4f-fz2mt_calico-apiserver(bdcc8529-6e18-4c7a-bfcb-a3677ead2383)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" podUID="bdcc8529-6e18-4c7a-bfcb-a3677ead2383" Jan 13 20:36:06.644206 containerd[1484]: time="2025-01-13T20:36:06.644160059Z" level=error msg="Failed to destroy network for sandbox \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.644656 containerd[1484]: time="2025-01-13T20:36:06.644617207Z" level=error msg="encountered an error cleaning up failed sandbox \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.644721 containerd[1484]: time="2025-01-13T20:36:06.644688241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bgl8r,Uid:b6de0b00-2ea2-4589-a7b0-2c07a644bac8,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.644918 kubelet[2667]: E0113 20:36:06.644884 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.644988 kubelet[2667]: E0113 20:36:06.644938 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bgl8r" Jan 13 20:36:06.644988 kubelet[2667]: E0113 20:36:06.644963 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bgl8r" Jan 13 20:36:06.645034 kubelet[2667]: E0113 20:36:06.645007 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bgl8r_kube-system(b6de0b00-2ea2-4589-a7b0-2c07a644bac8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bgl8r_kube-system(b6de0b00-2ea2-4589-a7b0-2c07a644bac8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bgl8r" podUID="b6de0b00-2ea2-4589-a7b0-2c07a644bac8" Jan 13 20:36:06.649329 containerd[1484]: time="2025-01-13T20:36:06.649289058Z" 
level=error msg="Failed to destroy network for sandbox \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.649868 containerd[1484]: time="2025-01-13T20:36:06.649832588Z" level=error msg="encountered an error cleaning up failed sandbox \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.649907 containerd[1484]: time="2025-01-13T20:36:06.649893031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbrd5,Uid:6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.650068 kubelet[2667]: E0113 20:36:06.650038 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:06.650101 kubelet[2667]: E0113 20:36:06.650078 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:36:06.650126 kubelet[2667]: E0113 20:36:06.650097 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:36:06.650158 kubelet[2667]: E0113 20:36:06.650130 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kbrd5_calico-system(6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kbrd5_calico-system(6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kbrd5" podUID="6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d" Jan 13 20:36:06.874648 systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:37504.service - OpenSSH per-connection server daemon (10.0.0.1:37504). 
Jan 13 20:36:06.978371 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 37504 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:06.981146 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:06.988402 systemd-logind[1468]: New session 9 of user core. Jan 13 20:36:07.000495 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:36:07.165867 sshd[4190]: Connection closed by 10.0.0.1 port 37504 Jan 13 20:36:07.166581 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:07.182061 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:37504.service: Deactivated successfully. Jan 13 20:36:07.192116 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:36:07.193113 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:36:07.195692 systemd-logind[1468]: Removed session 9. Jan 13 20:36:07.358323 systemd[1]: run-netns-cni\x2d20d02bb7\x2de08b\x2d915d\x2db952\x2d5edaaf0e1548.mount: Deactivated successfully. Jan 13 20:36:07.358440 systemd[1]: run-netns-cni\x2dda6762bf\x2deca4\x2d6683\x2dd442\x2d81b51b1f88ff.mount: Deactivated successfully. 
Jan 13 20:36:07.432224 kubelet[2667]: I0113 20:36:07.432166 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:36:07.432940 kubelet[2667]: E0113 20:36:07.432846 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:07.455411 kubelet[2667]: I0113 20:36:07.455373 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b" Jan 13 20:36:07.456280 containerd[1484]: time="2025-01-13T20:36:07.456215052Z" level=info msg="StopPodSandbox for \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\"" Jan 13 20:36:07.456786 containerd[1484]: time="2025-01-13T20:36:07.456468438Z" level=info msg="Ensure that sandbox ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b in task-service has been cleanup successfully" Jan 13 20:36:07.456958 containerd[1484]: time="2025-01-13T20:36:07.456885280Z" level=info msg="TearDown network for sandbox \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\" successfully" Jan 13 20:36:07.456958 containerd[1484]: time="2025-01-13T20:36:07.456903555Z" level=info msg="StopPodSandbox for \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\" returns successfully" Jan 13 20:36:07.457392 containerd[1484]: time="2025-01-13T20:36:07.457359721Z" level=info msg="StopPodSandbox for \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\"" Jan 13 20:36:07.457623 containerd[1484]: time="2025-01-13T20:36:07.457466120Z" level=info msg="TearDown network for sandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\" successfully" Jan 13 20:36:07.457623 containerd[1484]: time="2025-01-13T20:36:07.457484084Z" level=info msg="StopPodSandbox for \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\" returns 
successfully" Jan 13 20:36:07.457956 containerd[1484]: time="2025-01-13T20:36:07.457929520Z" level=info msg="StopPodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\"" Jan 13 20:36:07.458094 containerd[1484]: time="2025-01-13T20:36:07.458029097Z" level=info msg="TearDown network for sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" successfully" Jan 13 20:36:07.458094 containerd[1484]: time="2025-01-13T20:36:07.458042622Z" level=info msg="StopPodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" returns successfully" Jan 13 20:36:07.459040 containerd[1484]: time="2025-01-13T20:36:07.459015679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-qmkl8,Uid:93092177-1d32-4f13-a83c-5cb4c8aca67a,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:36:07.459347 systemd[1]: run-netns-cni\x2db547e7fe\x2dfe92\x2d5889\x2d7c75\x2d22b789b71a5c.mount: Deactivated successfully. Jan 13 20:36:07.460339 kubelet[2667]: I0113 20:36:07.459547 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1" Jan 13 20:36:07.461060 containerd[1484]: time="2025-01-13T20:36:07.460963947Z" level=info msg="StopPodSandbox for \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\"" Jan 13 20:36:07.462340 containerd[1484]: time="2025-01-13T20:36:07.461306169Z" level=info msg="Ensure that sandbox 8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1 in task-service has been cleanup successfully" Jan 13 20:36:07.464777 containerd[1484]: time="2025-01-13T20:36:07.464738522Z" level=info msg="TearDown network for sandbox \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\" successfully" Jan 13 20:36:07.464846 containerd[1484]: time="2025-01-13T20:36:07.464764411Z" level=info msg="StopPodSandbox for 
\"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\" returns successfully" Jan 13 20:36:07.465181 containerd[1484]: time="2025-01-13T20:36:07.465146829Z" level=info msg="StopPodSandbox for \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\"" Jan 13 20:36:07.465391 containerd[1484]: time="2025-01-13T20:36:07.465293414Z" level=info msg="TearDown network for sandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\" successfully" Jan 13 20:36:07.465391 containerd[1484]: time="2025-01-13T20:36:07.465305366Z" level=info msg="StopPodSandbox for \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\" returns successfully" Jan 13 20:36:07.465567 systemd[1]: run-netns-cni\x2d352c4426\x2da4a4\x2d255a\x2d148f\x2d08812e44dbff.mount: Deactivated successfully. Jan 13 20:36:07.466146 containerd[1484]: time="2025-01-13T20:36:07.465896195Z" level=info msg="StopPodSandbox for \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\"" Jan 13 20:36:07.466146 containerd[1484]: time="2025-01-13T20:36:07.465995110Z" level=info msg="TearDown network for sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" successfully" Jan 13 20:36:07.466146 containerd[1484]: time="2025-01-13T20:36:07.466011281Z" level=info msg="StopPodSandbox for \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" returns successfully" Jan 13 20:36:07.466620 kubelet[2667]: I0113 20:36:07.466597 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3" Jan 13 20:36:07.467346 containerd[1484]: time="2025-01-13T20:36:07.466713790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbrd5,Uid:6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d,Namespace:calico-system,Attempt:3,}" Jan 13 20:36:07.467817 containerd[1484]: time="2025-01-13T20:36:07.467239486Z" level=info msg="StopPodSandbox for 
\"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\"" Jan 13 20:36:07.467817 containerd[1484]: time="2025-01-13T20:36:07.467647182Z" level=info msg="Ensure that sandbox e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3 in task-service has been cleanup successfully" Jan 13 20:36:07.468035 containerd[1484]: time="2025-01-13T20:36:07.467961472Z" level=info msg="TearDown network for sandbox \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\" successfully" Jan 13 20:36:07.468035 containerd[1484]: time="2025-01-13T20:36:07.467983143Z" level=info msg="StopPodSandbox for \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\" returns successfully" Jan 13 20:36:07.469425 containerd[1484]: time="2025-01-13T20:36:07.469387840Z" level=info msg="StopPodSandbox for \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\"" Jan 13 20:36:07.470605 containerd[1484]: time="2025-01-13T20:36:07.469525007Z" level=info msg="TearDown network for sandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\" successfully" Jan 13 20:36:07.470605 containerd[1484]: time="2025-01-13T20:36:07.469548351Z" level=info msg="StopPodSandbox for \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\" returns successfully" Jan 13 20:36:07.470605 containerd[1484]: time="2025-01-13T20:36:07.470133679Z" level=info msg="StopPodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\"" Jan 13 20:36:07.470605 containerd[1484]: time="2025-01-13T20:36:07.470386825Z" level=info msg="TearDown network for sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" successfully" Jan 13 20:36:07.470605 containerd[1484]: time="2025-01-13T20:36:07.470399348Z" level=info msg="StopPodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" returns successfully" Jan 13 20:36:07.470964 containerd[1484]: time="2025-01-13T20:36:07.470943069Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc744d498-xqhts,Uid:ac41f38c-55b9-4a77-8a90-e5737b17fd15,Namespace:calico-system,Attempt:3,}" Jan 13 20:36:07.471393 kubelet[2667]: I0113 20:36:07.471350 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c" Jan 13 20:36:07.471828 containerd[1484]: time="2025-01-13T20:36:07.471806460Z" level=info msg="StopPodSandbox for \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\"" Jan 13 20:36:07.471969 containerd[1484]: time="2025-01-13T20:36:07.471943276Z" level=info msg="Ensure that sandbox 21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c in task-service has been cleanup successfully" Jan 13 20:36:07.472433 containerd[1484]: time="2025-01-13T20:36:07.472358836Z" level=info msg="TearDown network for sandbox \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\" successfully" Jan 13 20:36:07.472433 containerd[1484]: time="2025-01-13T20:36:07.472377621Z" level=info msg="StopPodSandbox for \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\" returns successfully" Jan 13 20:36:07.472807 containerd[1484]: time="2025-01-13T20:36:07.472742356Z" level=info msg="StopPodSandbox for \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\"" Jan 13 20:36:07.472901 containerd[1484]: time="2025-01-13T20:36:07.472848846Z" level=info msg="TearDown network for sandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\" successfully" Jan 13 20:36:07.472901 containerd[1484]: time="2025-01-13T20:36:07.472862722Z" level=info msg="StopPodSandbox for \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\" returns successfully" Jan 13 20:36:07.473174 systemd[1]: run-netns-cni\x2dfba26c56\x2d42d9\x2dfdda\x2d5ade\x2d025ab453697a.mount: Deactivated successfully. 
Jan 13 20:36:07.474532 containerd[1484]: time="2025-01-13T20:36:07.474392344Z" level=info msg="StopPodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\"" Jan 13 20:36:07.474532 containerd[1484]: time="2025-01-13T20:36:07.474473697Z" level=info msg="TearDown network for sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" successfully" Jan 13 20:36:07.474532 containerd[1484]: time="2025-01-13T20:36:07.474483465Z" level=info msg="StopPodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" returns successfully" Jan 13 20:36:07.475908 kubelet[2667]: I0113 20:36:07.475189 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c" Jan 13 20:36:07.475966 containerd[1484]: time="2025-01-13T20:36:07.475221730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-fz2mt,Uid:bdcc8529-6e18-4c7a-bfcb-a3677ead2383,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:36:07.475995 systemd[1]: run-netns-cni\x2dcf30fffe\x2df2c3\x2d1d12\x2da6bd\x2db7fbac58df3e.mount: Deactivated successfully. 
Jan 13 20:36:07.476502 containerd[1484]: time="2025-01-13T20:36:07.476213091Z" level=info msg="StopPodSandbox for \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\"" Jan 13 20:36:07.476502 containerd[1484]: time="2025-01-13T20:36:07.476457921Z" level=info msg="Ensure that sandbox 398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c in task-service has been cleanup successfully" Jan 13 20:36:07.476860 containerd[1484]: time="2025-01-13T20:36:07.476817546Z" level=info msg="TearDown network for sandbox \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\" successfully" Jan 13 20:36:07.476860 containerd[1484]: time="2025-01-13T20:36:07.476841151Z" level=info msg="StopPodSandbox for \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\" returns successfully" Jan 13 20:36:07.477429 containerd[1484]: time="2025-01-13T20:36:07.477176400Z" level=info msg="StopPodSandbox for \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\"" Jan 13 20:36:07.477429 containerd[1484]: time="2025-01-13T20:36:07.477351038Z" level=info msg="TearDown network for sandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\" successfully" Jan 13 20:36:07.477429 containerd[1484]: time="2025-01-13T20:36:07.477368600Z" level=info msg="StopPodSandbox for \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\" returns successfully" Jan 13 20:36:07.478067 containerd[1484]: time="2025-01-13T20:36:07.477604654Z" level=info msg="StopPodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\"" Jan 13 20:36:07.478067 containerd[1484]: time="2025-01-13T20:36:07.477710953Z" level=info msg="TearDown network for sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" successfully" Jan 13 20:36:07.478067 containerd[1484]: time="2025-01-13T20:36:07.477723988Z" level=info msg="StopPodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" 
returns successfully" Jan 13 20:36:07.478184 kubelet[2667]: E0113 20:36:07.477916 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:07.478184 kubelet[2667]: I0113 20:36:07.477960 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef" Jan 13 20:36:07.478508 containerd[1484]: time="2025-01-13T20:36:07.478481750Z" level=info msg="StopPodSandbox for \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\"" Jan 13 20:36:07.478640 kubelet[2667]: E0113 20:36:07.478545 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:07.478734 containerd[1484]: time="2025-01-13T20:36:07.478643574Z" level=info msg="Ensure that sandbox e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef in task-service has been cleanup successfully" Jan 13 20:36:07.478963 containerd[1484]: time="2025-01-13T20:36:07.478934830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46wk,Uid:e2b6f819-04b2-4600-8686-8ab182ac15ce,Namespace:kube-system,Attempt:3,}" Jan 13 20:36:07.479177 containerd[1484]: time="2025-01-13T20:36:07.479153671Z" level=info msg="TearDown network for sandbox \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\" successfully" Jan 13 20:36:07.479177 containerd[1484]: time="2025-01-13T20:36:07.479173899Z" level=info msg="StopPodSandbox for \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\" returns successfully" Jan 13 20:36:07.479463 containerd[1484]: time="2025-01-13T20:36:07.479437815Z" level=info msg="StopPodSandbox for \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\"" Jan 13 20:36:07.479548 containerd[1484]: 
time="2025-01-13T20:36:07.479528986Z" level=info msg="TearDown network for sandbox \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\" successfully" Jan 13 20:36:07.479548 containerd[1484]: time="2025-01-13T20:36:07.479544095Z" level=info msg="StopPodSandbox for \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\" returns successfully" Jan 13 20:36:07.479874 containerd[1484]: time="2025-01-13T20:36:07.479840120Z" level=info msg="StopPodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\"" Jan 13 20:36:07.480056 containerd[1484]: time="2025-01-13T20:36:07.479973160Z" level=info msg="TearDown network for sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" successfully" Jan 13 20:36:07.480092 containerd[1484]: time="2025-01-13T20:36:07.480051787Z" level=info msg="StopPodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" returns successfully" Jan 13 20:36:07.480424 kubelet[2667]: E0113 20:36:07.480312 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:07.480582 containerd[1484]: time="2025-01-13T20:36:07.480544482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bgl8r,Uid:b6de0b00-2ea2-4589-a7b0-2c07a644bac8,Namespace:kube-system,Attempt:3,}" Jan 13 20:36:07.744146 containerd[1484]: time="2025-01-13T20:36:07.744100757Z" level=error msg="Failed to destroy network for sandbox \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.744489 containerd[1484]: time="2025-01-13T20:36:07.744467015Z" level=error msg="encountered an error cleaning up failed sandbox 
\"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.744541 containerd[1484]: time="2025-01-13T20:36:07.744524372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-qmkl8,Uid:93092177-1d32-4f13-a83c-5cb4c8aca67a,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.744818 kubelet[2667]: E0113 20:36:07.744778 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.744939 kubelet[2667]: E0113 20:36:07.744843 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" Jan 13 20:36:07.744939 kubelet[2667]: E0113 20:36:07.744869 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" Jan 13 20:36:07.744994 kubelet[2667]: E0113 20:36:07.744923 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d65d67f4f-qmkl8_calico-apiserver(93092177-1d32-4f13-a83c-5cb4c8aca67a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d65d67f4f-qmkl8_calico-apiserver(93092177-1d32-4f13-a83c-5cb4c8aca67a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" podUID="93092177-1d32-4f13-a83c-5cb4c8aca67a" Jan 13 20:36:07.885493 containerd[1484]: time="2025-01-13T20:36:07.885434031Z" level=error msg="Failed to destroy network for sandbox \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.887583 containerd[1484]: time="2025-01-13T20:36:07.887521861Z" level=error msg="encountered an error cleaning up failed sandbox \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 13 20:36:07.887672 containerd[1484]: time="2025-01-13T20:36:07.887605267Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbrd5,Uid:6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.888064 kubelet[2667]: E0113 20:36:07.887864 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.888064 kubelet[2667]: E0113 20:36:07.887931 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:36:07.888064 kubelet[2667]: E0113 20:36:07.887957 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:36:07.888459 
kubelet[2667]: E0113 20:36:07.888360 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kbrd5_calico-system(6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kbrd5_calico-system(6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kbrd5" podUID="6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d" Jan 13 20:36:07.965606 containerd[1484]: time="2025-01-13T20:36:07.965306926Z" level=error msg="Failed to destroy network for sandbox \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.966165 containerd[1484]: time="2025-01-13T20:36:07.966112497Z" level=error msg="encountered an error cleaning up failed sandbox \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.969846 containerd[1484]: time="2025-01-13T20:36:07.969819165Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-fz2mt,Uid:bdcc8529-6e18-4c7a-bfcb-a3677ead2383,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.971775 kubelet[2667]: E0113 20:36:07.971713 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.971859 kubelet[2667]: E0113 20:36:07.971802 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" Jan 13 20:36:07.971859 kubelet[2667]: E0113 20:36:07.971829 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" Jan 13 20:36:07.971933 kubelet[2667]: E0113 20:36:07.971882 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d65d67f4f-fz2mt_calico-apiserver(bdcc8529-6e18-4c7a-bfcb-a3677ead2383)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5d65d67f4f-fz2mt_calico-apiserver(bdcc8529-6e18-4c7a-bfcb-a3677ead2383)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" podUID="bdcc8529-6e18-4c7a-bfcb-a3677ead2383" Jan 13 20:36:07.976433 containerd[1484]: time="2025-01-13T20:36:07.976379010Z" level=error msg="Failed to destroy network for sandbox \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.977564 containerd[1484]: time="2025-01-13T20:36:07.976885661Z" level=error msg="encountered an error cleaning up failed sandbox \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.977564 containerd[1484]: time="2025-01-13T20:36:07.976963066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46wk,Uid:e2b6f819-04b2-4600-8686-8ab182ac15ce,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.977787 kubelet[2667]: E0113 20:36:07.977208 2667 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.977787 kubelet[2667]: E0113 20:36:07.977294 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j46wk" Jan 13 20:36:07.977787 kubelet[2667]: E0113 20:36:07.977322 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j46wk" Jan 13 20:36:07.977932 kubelet[2667]: E0113 20:36:07.977466 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j46wk_kube-system(e2b6f819-04b2-4600-8686-8ab182ac15ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j46wk_kube-system(e2b6f819-04b2-4600-8686-8ab182ac15ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j46wk" podUID="e2b6f819-04b2-4600-8686-8ab182ac15ce" Jan 13 20:36:07.985637 containerd[1484]: time="2025-01-13T20:36:07.985559733Z" level=error msg="Failed to destroy network for sandbox \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.985977 containerd[1484]: time="2025-01-13T20:36:07.985942191Z" level=error msg="Failed to destroy network for sandbox \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.986195 containerd[1484]: time="2025-01-13T20:36:07.986154219Z" level=error msg="encountered an error cleaning up failed sandbox \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.986321 containerd[1484]: time="2025-01-13T20:36:07.986233608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc744d498-xqhts,Uid:ac41f38c-55b9-4a77-8a90-e5737b17fd15,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.986429 containerd[1484]: 
time="2025-01-13T20:36:07.986322274Z" level=error msg="encountered an error cleaning up failed sandbox \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.986429 containerd[1484]: time="2025-01-13T20:36:07.986368792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bgl8r,Uid:b6de0b00-2ea2-4589-a7b0-2c07a644bac8,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.986686 kubelet[2667]: E0113 20:36:07.986634 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.986778 kubelet[2667]: E0113 20:36:07.986721 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" Jan 13 20:36:07.986778 kubelet[2667]: E0113 20:36:07.986747 2667 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" Jan 13 20:36:07.986850 kubelet[2667]: E0113 20:36:07.986807 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bc744d498-xqhts_calico-system(ac41f38c-55b9-4a77-8a90-e5737b17fd15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bc744d498-xqhts_calico-system(ac41f38c-55b9-4a77-8a90-e5737b17fd15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" podUID="ac41f38c-55b9-4a77-8a90-e5737b17fd15" Jan 13 20:36:07.986850 kubelet[2667]: E0113 20:36:07.986624 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:07.986952 kubelet[2667]: E0113 20:36:07.986870 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bgl8r" Jan 13 20:36:07.986952 kubelet[2667]: E0113 20:36:07.986889 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bgl8r" Jan 13 20:36:07.987036 kubelet[2667]: E0113 20:36:07.986962 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bgl8r_kube-system(b6de0b00-2ea2-4589-a7b0-2c07a644bac8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bgl8r_kube-system(b6de0b00-2ea2-4589-a7b0-2c07a644bac8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bgl8r" podUID="b6de0b00-2ea2-4589-a7b0-2c07a644bac8" Jan 13 20:36:08.360037 systemd[1]: run-netns-cni\x2d7ee0301a\x2d4895\x2dc940\x2d4349\x2d6047ccb7f8ae.mount: Deactivated successfully. Jan 13 20:36:08.360148 systemd[1]: run-netns-cni\x2d853ec585\x2dbd9b\x2d904e\x2d1762\x2dc322b7526fb0.mount: Deactivated successfully. 
Jan 13 20:36:08.482135 kubelet[2667]: I0113 20:36:08.482102 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff" Jan 13 20:36:08.483527 containerd[1484]: time="2025-01-13T20:36:08.483023284Z" level=info msg="StopPodSandbox for \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\"" Jan 13 20:36:08.483527 containerd[1484]: time="2025-01-13T20:36:08.483301856Z" level=info msg="Ensure that sandbox 6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff in task-service has been cleanup successfully" Jan 13 20:36:08.483853 containerd[1484]: time="2025-01-13T20:36:08.483689314Z" level=info msg="TearDown network for sandbox \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\" successfully" Jan 13 20:36:08.483853 containerd[1484]: time="2025-01-13T20:36:08.483706336Z" level=info msg="StopPodSandbox for \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\" returns successfully" Jan 13 20:36:08.486393 containerd[1484]: time="2025-01-13T20:36:08.484778718Z" level=info msg="StopPodSandbox for \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\"" Jan 13 20:36:08.486393 containerd[1484]: time="2025-01-13T20:36:08.484892462Z" level=info msg="TearDown network for sandbox \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\" successfully" Jan 13 20:36:08.486393 containerd[1484]: time="2025-01-13T20:36:08.484906939Z" level=info msg="StopPodSandbox for \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\" returns successfully" Jan 13 20:36:08.486624 systemd[1]: run-netns-cni\x2d2e0f6bc7\x2dc9bc\x2d92ac\x2d9f82\x2d99ac63d6944b.mount: Deactivated successfully. 
Jan 13 20:36:08.486919 containerd[1484]: time="2025-01-13T20:36:08.486882817Z" level=info msg="StopPodSandbox for \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\"" Jan 13 20:36:08.487012 containerd[1484]: time="2025-01-13T20:36:08.486988647Z" level=info msg="TearDown network for sandbox \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\" successfully" Jan 13 20:36:08.487055 containerd[1484]: time="2025-01-13T20:36:08.487009276Z" level=info msg="StopPodSandbox for \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\" returns successfully" Jan 13 20:36:08.487593 containerd[1484]: time="2025-01-13T20:36:08.487263462Z" level=info msg="StopPodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\"" Jan 13 20:36:08.488373 containerd[1484]: time="2025-01-13T20:36:08.488335665Z" level=info msg="TearDown network for sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" successfully" Jan 13 20:36:08.488373 containerd[1484]: time="2025-01-13T20:36:08.488361844Z" level=info msg="StopPodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" returns successfully" Jan 13 20:36:08.489471 kubelet[2667]: E0113 20:36:08.488518 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:08.489522 containerd[1484]: time="2025-01-13T20:36:08.489456729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bgl8r,Uid:b6de0b00-2ea2-4589-a7b0-2c07a644bac8,Namespace:kube-system,Attempt:4,}" Jan 13 20:36:08.489871 kubelet[2667]: I0113 20:36:08.489716 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f" Jan 13 20:36:08.490273 containerd[1484]: time="2025-01-13T20:36:08.490228958Z" level=info msg="StopPodSandbox for 
\"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\"" Jan 13 20:36:08.490902 containerd[1484]: time="2025-01-13T20:36:08.490496180Z" level=info msg="Ensure that sandbox 7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f in task-service has been cleanup successfully" Jan 13 20:36:08.490902 containerd[1484]: time="2025-01-13T20:36:08.490785283Z" level=info msg="TearDown network for sandbox \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\" successfully" Jan 13 20:36:08.490902 containerd[1484]: time="2025-01-13T20:36:08.490799619Z" level=info msg="StopPodSandbox for \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\" returns successfully" Jan 13 20:36:08.492364 containerd[1484]: time="2025-01-13T20:36:08.492109699Z" level=info msg="StopPodSandbox for \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\"" Jan 13 20:36:08.492364 containerd[1484]: time="2025-01-13T20:36:08.492221158Z" level=info msg="TearDown network for sandbox \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\" successfully" Jan 13 20:36:08.492364 containerd[1484]: time="2025-01-13T20:36:08.492232269Z" level=info msg="StopPodSandbox for \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\" returns successfully" Jan 13 20:36:08.495529 containerd[1484]: time="2025-01-13T20:36:08.494195975Z" level=info msg="StopPodSandbox for \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\"" Jan 13 20:36:08.495529 containerd[1484]: time="2025-01-13T20:36:08.494499725Z" level=info msg="TearDown network for sandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\" successfully" Jan 13 20:36:08.495529 containerd[1484]: time="2025-01-13T20:36:08.494513240Z" level=info msg="StopPodSandbox for \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\" returns successfully" Jan 13 20:36:08.495529 containerd[1484]: time="2025-01-13T20:36:08.495049497Z" level=info 
msg="StopPodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\"" Jan 13 20:36:08.495529 containerd[1484]: time="2025-01-13T20:36:08.495173880Z" level=info msg="TearDown network for sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" successfully" Jan 13 20:36:08.495529 containerd[1484]: time="2025-01-13T20:36:08.495185843Z" level=info msg="StopPodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" returns successfully" Jan 13 20:36:08.493781 systemd[1]: run-netns-cni\x2d1707e048\x2ddf73\x2d1413\x2ddbe0\x2dcb16ec81627a.mount: Deactivated successfully. Jan 13 20:36:08.496279 containerd[1484]: time="2025-01-13T20:36:08.496186260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-qmkl8,Uid:93092177-1d32-4f13-a83c-5cb4c8aca67a,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:36:08.496774 kubelet[2667]: I0113 20:36:08.496644 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2" Jan 13 20:36:08.498220 containerd[1484]: time="2025-01-13T20:36:08.498164984Z" level=info msg="StopPodSandbox for \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\"" Jan 13 20:36:08.499294 containerd[1484]: time="2025-01-13T20:36:08.499266672Z" level=info msg="Ensure that sandbox c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2 in task-service has been cleanup successfully" Jan 13 20:36:08.512463 containerd[1484]: time="2025-01-13T20:36:08.512347225Z" level=info msg="TearDown network for sandbox \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\" successfully" Jan 13 20:36:08.512463 containerd[1484]: time="2025-01-13T20:36:08.512380077Z" level=info msg="StopPodSandbox for \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\" returns successfully" Jan 13 20:36:08.513498 systemd[1]: 
run-netns-cni\x2d1cf8bbb4\x2d6746\x2de4a4\x2de186\x2d9f4e695341e8.mount: Deactivated successfully. Jan 13 20:36:08.514340 containerd[1484]: time="2025-01-13T20:36:08.514296905Z" level=info msg="StopPodSandbox for \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\"" Jan 13 20:36:08.514464 containerd[1484]: time="2025-01-13T20:36:08.514412642Z" level=info msg="TearDown network for sandbox \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\" successfully" Jan 13 20:36:08.514464 containerd[1484]: time="2025-01-13T20:36:08.514437338Z" level=info msg="StopPodSandbox for \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\" returns successfully" Jan 13 20:36:08.514808 containerd[1484]: time="2025-01-13T20:36:08.514779450Z" level=info msg="StopPodSandbox for \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\"" Jan 13 20:36:08.514920 containerd[1484]: time="2025-01-13T20:36:08.514899285Z" level=info msg="TearDown network for sandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\" successfully" Jan 13 20:36:08.514920 containerd[1484]: time="2025-01-13T20:36:08.514917139Z" level=info msg="StopPodSandbox for \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\" returns successfully" Jan 13 20:36:08.515662 containerd[1484]: time="2025-01-13T20:36:08.515421315Z" level=info msg="StopPodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\"" Jan 13 20:36:08.517338 kubelet[2667]: I0113 20:36:08.517302 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d" Jan 13 20:36:08.535523 kubelet[2667]: I0113 20:36:08.535480 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017" Jan 13 20:36:08.538044 containerd[1484]: time="2025-01-13T20:36:08.515520321Z" level=info 
msg="TearDown network for sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" successfully" Jan 13 20:36:08.538184 containerd[1484]: time="2025-01-13T20:36:08.538161920Z" level=info msg="StopPodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" returns successfully" Jan 13 20:36:08.538375 containerd[1484]: time="2025-01-13T20:36:08.536279055Z" level=info msg="StopPodSandbox for \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\"" Jan 13 20:36:08.538479 containerd[1484]: time="2025-01-13T20:36:08.518321108Z" level=info msg="StopPodSandbox for \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\"" Jan 13 20:36:08.538728 containerd[1484]: time="2025-01-13T20:36:08.538702915Z" level=info msg="Ensure that sandbox dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d in task-service has been cleanup successfully" Jan 13 20:36:08.538927 containerd[1484]: time="2025-01-13T20:36:08.538425986Z" level=info msg="Ensure that sandbox 340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017 in task-service has been cleanup successfully" Jan 13 20:36:08.539151 containerd[1484]: time="2025-01-13T20:36:08.539092386Z" level=info msg="TearDown network for sandbox \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\" successfully" Jan 13 20:36:08.539151 containerd[1484]: time="2025-01-13T20:36:08.539115910Z" level=info msg="StopPodSandbox for \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\" returns successfully" Jan 13 20:36:08.539281 containerd[1484]: time="2025-01-13T20:36:08.539234322Z" level=info msg="TearDown network for sandbox \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\" successfully" Jan 13 20:36:08.539435 containerd[1484]: time="2025-01-13T20:36:08.539356362Z" level=info msg="StopPodSandbox for \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\" returns successfully" Jan 13 20:36:08.540210 
containerd[1484]: time="2025-01-13T20:36:08.539764698Z" level=info msg="StopPodSandbox for \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\"" Jan 13 20:36:08.540791 containerd[1484]: time="2025-01-13T20:36:08.539847794Z" level=info msg="TearDown network for sandbox \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\" successfully" Jan 13 20:36:08.540856 containerd[1484]: time="2025-01-13T20:36:08.540840919Z" level=info msg="StopPodSandbox for \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\" returns successfully" Jan 13 20:36:08.540954 containerd[1484]: time="2025-01-13T20:36:08.540586210Z" level=info msg="StopPodSandbox for \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\"" Jan 13 20:36:08.541072 containerd[1484]: time="2025-01-13T20:36:08.541057685Z" level=info msg="TearDown network for sandbox \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\" successfully" Jan 13 20:36:08.541128 containerd[1484]: time="2025-01-13T20:36:08.541116505Z" level=info msg="StopPodSandbox for \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\" returns successfully" Jan 13 20:36:08.541298 containerd[1484]: time="2025-01-13T20:36:08.541279872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc744d498-xqhts,Uid:ac41f38c-55b9-4a77-8a90-e5737b17fd15,Namespace:calico-system,Attempt:4,}" Jan 13 20:36:08.541511 containerd[1484]: time="2025-01-13T20:36:08.541470049Z" level=info msg="StopPodSandbox for \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\"" Jan 13 20:36:08.542786 kubelet[2667]: I0113 20:36:08.540722 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1" Jan 13 20:36:08.543284 containerd[1484]: time="2025-01-13T20:36:08.543090671Z" level=info msg="Ensure that sandbox 
bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1 in task-service has been cleanup successfully" Jan 13 20:36:08.543710 containerd[1484]: time="2025-01-13T20:36:08.543692370Z" level=info msg="StopPodSandbox for \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\"" Jan 13 20:36:08.544636 containerd[1484]: time="2025-01-13T20:36:08.544543047Z" level=info msg="TearDown network for sandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\" successfully" Jan 13 20:36:08.544804 containerd[1484]: time="2025-01-13T20:36:08.544727443Z" level=info msg="StopPodSandbox for \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\" returns successfully" Jan 13 20:36:08.545068 containerd[1484]: time="2025-01-13T20:36:08.543795814Z" level=info msg="TearDown network for sandbox \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\" successfully" Jan 13 20:36:08.545068 containerd[1484]: time="2025-01-13T20:36:08.544907612Z" level=info msg="StopPodSandbox for \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\" returns successfully" Jan 13 20:36:08.545068 containerd[1484]: time="2025-01-13T20:36:08.543831853Z" level=info msg="StopPodSandbox for \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\"" Jan 13 20:36:08.545068 containerd[1484]: time="2025-01-13T20:36:08.545002239Z" level=info msg="TearDown network for sandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\" successfully" Jan 13 20:36:08.545068 containerd[1484]: time="2025-01-13T20:36:08.545011727Z" level=info msg="StopPodSandbox for \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\" returns successfully" Jan 13 20:36:08.545825 containerd[1484]: time="2025-01-13T20:36:08.545567941Z" level=info msg="StopPodSandbox for \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\"" Jan 13 20:36:08.545825 containerd[1484]: time="2025-01-13T20:36:08.545598218Z" level=info 
msg="StopPodSandbox for \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\"" Jan 13 20:36:08.545825 containerd[1484]: time="2025-01-13T20:36:08.545653852Z" level=info msg="TearDown network for sandbox \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\" successfully" Jan 13 20:36:08.545825 containerd[1484]: time="2025-01-13T20:36:08.545663690Z" level=info msg="StopPodSandbox for \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\" returns successfully" Jan 13 20:36:08.545825 containerd[1484]: time="2025-01-13T20:36:08.545698415Z" level=info msg="StopPodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\"" Jan 13 20:36:08.545825 containerd[1484]: time="2025-01-13T20:36:08.545712842Z" level=info msg="TearDown network for sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" successfully" Jan 13 20:36:08.545825 containerd[1484]: time="2025-01-13T20:36:08.545725226Z" level=info msg="StopPodSandbox for \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" returns successfully" Jan 13 20:36:08.545825 containerd[1484]: time="2025-01-13T20:36:08.545779158Z" level=info msg="TearDown network for sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" successfully" Jan 13 20:36:08.545825 containerd[1484]: time="2025-01-13T20:36:08.545790008Z" level=info msg="StopPodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" returns successfully" Jan 13 20:36:08.546754 containerd[1484]: time="2025-01-13T20:36:08.546208523Z" level=info msg="StopPodSandbox for \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\"" Jan 13 20:36:08.546754 containerd[1484]: time="2025-01-13T20:36:08.546330593Z" level=info msg="TearDown network for sandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\" successfully" Jan 13 20:36:08.546754 containerd[1484]: time="2025-01-13T20:36:08.546341984Z" level=info 
msg="StopPodSandbox for \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\" returns successfully" Jan 13 20:36:08.546754 containerd[1484]: time="2025-01-13T20:36:08.546429348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-fz2mt,Uid:bdcc8529-6e18-4c7a-bfcb-a3677ead2383,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:36:08.546754 containerd[1484]: time="2025-01-13T20:36:08.546699915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbrd5,Uid:6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d,Namespace:calico-system,Attempt:4,}" Jan 13 20:36:08.547327 containerd[1484]: time="2025-01-13T20:36:08.547294522Z" level=info msg="StopPodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\"" Jan 13 20:36:08.547412 containerd[1484]: time="2025-01-13T20:36:08.547389190Z" level=info msg="TearDown network for sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" successfully" Jan 13 20:36:08.547412 containerd[1484]: time="2025-01-13T20:36:08.547401833Z" level=info msg="StopPodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" returns successfully" Jan 13 20:36:08.547626 kubelet[2667]: E0113 20:36:08.547600 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:08.548552 containerd[1484]: time="2025-01-13T20:36:08.548519030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46wk,Uid:e2b6f819-04b2-4600-8686-8ab182ac15ce,Namespace:kube-system,Attempt:4,}" Jan 13 20:36:08.660754 containerd[1484]: time="2025-01-13T20:36:08.660580853Z" level=error msg="Failed to destroy network for sandbox \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.661474 containerd[1484]: time="2025-01-13T20:36:08.661162845Z" level=error msg="encountered an error cleaning up failed sandbox \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.662177 containerd[1484]: time="2025-01-13T20:36:08.661994286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bgl8r,Uid:b6de0b00-2ea2-4589-a7b0-2c07a644bac8,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.662662 kubelet[2667]: E0113 20:36:08.662337 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.662662 kubelet[2667]: E0113 20:36:08.662410 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-bgl8r" Jan 13 20:36:08.662662 kubelet[2667]: E0113 20:36:08.662439 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bgl8r" Jan 13 20:36:08.664615 kubelet[2667]: E0113 20:36:08.662581 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bgl8r_kube-system(b6de0b00-2ea2-4589-a7b0-2c07a644bac8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bgl8r_kube-system(b6de0b00-2ea2-4589-a7b0-2c07a644bac8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bgl8r" podUID="b6de0b00-2ea2-4589-a7b0-2c07a644bac8" Jan 13 20:36:08.665616 containerd[1484]: time="2025-01-13T20:36:08.665580717Z" level=error msg="Failed to destroy network for sandbox \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.666532 containerd[1484]: time="2025-01-13T20:36:08.666503339Z" level=error msg="encountered an error cleaning up failed sandbox \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.667353 containerd[1484]: time="2025-01-13T20:36:08.667321725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-qmkl8,Uid:93092177-1d32-4f13-a83c-5cb4c8aca67a,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.667662 kubelet[2667]: E0113 20:36:08.667607 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.668651 kubelet[2667]: E0113 20:36:08.668387 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" Jan 13 20:36:08.668651 kubelet[2667]: E0113 20:36:08.668422 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" Jan 13 20:36:08.668651 kubelet[2667]: E0113 20:36:08.668507 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d65d67f4f-qmkl8_calico-apiserver(93092177-1d32-4f13-a83c-5cb4c8aca67a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d65d67f4f-qmkl8_calico-apiserver(93092177-1d32-4f13-a83c-5cb4c8aca67a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" podUID="93092177-1d32-4f13-a83c-5cb4c8aca67a" Jan 13 20:36:08.700392 containerd[1484]: time="2025-01-13T20:36:08.700321777Z" level=error msg="Failed to destroy network for sandbox \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.701327 containerd[1484]: time="2025-01-13T20:36:08.701235472Z" level=error msg="encountered an error cleaning up failed sandbox \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.701413 containerd[1484]: time="2025-01-13T20:36:08.701334578Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-bc744d498-xqhts,Uid:ac41f38c-55b9-4a77-8a90-e5737b17fd15,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.701649 kubelet[2667]: E0113 20:36:08.701579 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.701903 kubelet[2667]: E0113 20:36:08.701880 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" Jan 13 20:36:08.701990 kubelet[2667]: E0113 20:36:08.701974 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" Jan 13 20:36:08.702105 kubelet[2667]: E0113 20:36:08.702080 2667 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bc744d498-xqhts_calico-system(ac41f38c-55b9-4a77-8a90-e5737b17fd15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bc744d498-xqhts_calico-system(ac41f38c-55b9-4a77-8a90-e5737b17fd15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" podUID="ac41f38c-55b9-4a77-8a90-e5737b17fd15" Jan 13 20:36:08.731270 containerd[1484]: time="2025-01-13T20:36:08.731158938Z" level=error msg="Failed to destroy network for sandbox \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.731931 containerd[1484]: time="2025-01-13T20:36:08.731816744Z" level=error msg="encountered an error cleaning up failed sandbox \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.732260 containerd[1484]: time="2025-01-13T20:36:08.732204271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbrd5,Uid:6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.732717 kubelet[2667]: E0113 20:36:08.732667 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.732784 kubelet[2667]: E0113 20:36:08.732752 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:36:08.732784 kubelet[2667]: E0113 20:36:08.732779 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:36:08.732853 kubelet[2667]: E0113 20:36:08.732824 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kbrd5_calico-system(6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kbrd5_calico-system(6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kbrd5" podUID="6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d" Jan 13 20:36:08.733481 containerd[1484]: time="2025-01-13T20:36:08.733439229Z" level=error msg="Failed to destroy network for sandbox \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.733939 containerd[1484]: time="2025-01-13T20:36:08.733915252Z" level=error msg="encountered an error cleaning up failed sandbox \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.734062 containerd[1484]: time="2025-01-13T20:36:08.734041769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-fz2mt,Uid:bdcc8529-6e18-4c7a-bfcb-a3677ead2383,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.734504 kubelet[2667]: E0113 20:36:08.734352 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.734504 kubelet[2667]: E0113 20:36:08.734397 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" Jan 13 20:36:08.734504 kubelet[2667]: E0113 20:36:08.734414 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" Jan 13 20:36:08.734614 kubelet[2667]: E0113 20:36:08.734445 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d65d67f4f-fz2mt_calico-apiserver(bdcc8529-6e18-4c7a-bfcb-a3677ead2383)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d65d67f4f-fz2mt_calico-apiserver(bdcc8529-6e18-4c7a-bfcb-a3677ead2383)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" podUID="bdcc8529-6e18-4c7a-bfcb-a3677ead2383" Jan 13 20:36:08.740388 containerd[1484]: time="2025-01-13T20:36:08.740348908Z" level=error msg="Failed to destroy network for sandbox \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.740924 containerd[1484]: time="2025-01-13T20:36:08.740838978Z" level=error msg="encountered an error cleaning up failed sandbox \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.740924 containerd[1484]: time="2025-01-13T20:36:08.740906785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46wk,Uid:e2b6f819-04b2-4600-8686-8ab182ac15ce,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.741176 kubelet[2667]: E0113 20:36:08.741130 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:08.741224 kubelet[2667]: E0113 20:36:08.741206 2667 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j46wk" Jan 13 20:36:08.741375 kubelet[2667]: E0113 20:36:08.741232 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j46wk" Jan 13 20:36:08.741375 kubelet[2667]: E0113 20:36:08.741323 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j46wk_kube-system(e2b6f819-04b2-4600-8686-8ab182ac15ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j46wk_kube-system(e2b6f819-04b2-4600-8686-8ab182ac15ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j46wk" podUID="e2b6f819-04b2-4600-8686-8ab182ac15ce" Jan 13 20:36:09.358211 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b-shm.mount: Deactivated successfully. 
Jan 13 20:36:09.358345 systemd[1]: run-netns-cni\x2d75cd7da9\x2d0d91\x2dd640\x2d73d7\x2de3f51ccfeede.mount: Deactivated successfully. Jan 13 20:36:09.358421 systemd[1]: run-netns-cni\x2d1c5b3620\x2d3279\x2d12da\x2d7088\x2dc958d81e9a2c.mount: Deactivated successfully. Jan 13 20:36:09.358494 systemd[1]: run-netns-cni\x2d7c1d06ac\x2dab69\x2df878\x2d58fa\x2d053a779c6e4a.mount: Deactivated successfully. Jan 13 20:36:09.547311 kubelet[2667]: I0113 20:36:09.547271 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb" Jan 13 20:36:09.548370 containerd[1484]: time="2025-01-13T20:36:09.548211152Z" level=info msg="StopPodSandbox for \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\"" Jan 13 20:36:09.548658 containerd[1484]: time="2025-01-13T20:36:09.548486559Z" level=info msg="Ensure that sandbox 96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb in task-service has been cleanup successfully" Jan 13 20:36:09.554515 containerd[1484]: time="2025-01-13T20:36:09.548781983Z" level=info msg="TearDown network for sandbox \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\" successfully" Jan 13 20:36:09.554515 containerd[1484]: time="2025-01-13T20:36:09.548802732Z" level=info msg="StopPodSandbox for \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\" returns successfully" Jan 13 20:36:09.554515 containerd[1484]: time="2025-01-13T20:36:09.549599177Z" level=info msg="StopPodSandbox for \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\"" Jan 13 20:36:09.554515 containerd[1484]: time="2025-01-13T20:36:09.549707270Z" level=info msg="TearDown network for sandbox \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\" successfully" Jan 13 20:36:09.554515 containerd[1484]: time="2025-01-13T20:36:09.549721507Z" level=info msg="StopPodSandbox for 
\"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\" returns successfully" Jan 13 20:36:09.554515 containerd[1484]: time="2025-01-13T20:36:09.550894157Z" level=info msg="StopPodSandbox for \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\"" Jan 13 20:36:09.554515 containerd[1484]: time="2025-01-13T20:36:09.552522894Z" level=info msg="TearDown network for sandbox \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\" successfully" Jan 13 20:36:09.554515 containerd[1484]: time="2025-01-13T20:36:09.552547701Z" level=info msg="StopPodSandbox for \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\" returns successfully" Jan 13 20:36:09.554515 containerd[1484]: time="2025-01-13T20:36:09.552838517Z" level=info msg="StopPodSandbox for \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\"" Jan 13 20:36:09.554515 containerd[1484]: time="2025-01-13T20:36:09.552926452Z" level=info msg="TearDown network for sandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\" successfully" Jan 13 20:36:09.554515 containerd[1484]: time="2025-01-13T20:36:09.552946770Z" level=info msg="StopPodSandbox for \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\" returns successfully" Jan 13 20:36:09.554874 kubelet[2667]: I0113 20:36:09.553364 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79" Jan 13 20:36:09.552750 systemd[1]: run-netns-cni\x2dca12f93f\x2db6e6\x2d36df\x2da89c\x2dc3a7821c4dac.mount: Deactivated successfully. 
Jan 13 20:36:09.556235 containerd[1484]: time="2025-01-13T20:36:09.555366222Z" level=info msg="StopPodSandbox for \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\"" Jan 13 20:36:09.556235 containerd[1484]: time="2025-01-13T20:36:09.555563461Z" level=info msg="Ensure that sandbox 24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79 in task-service has been cleanup successfully" Jan 13 20:36:09.556235 containerd[1484]: time="2025-01-13T20:36:09.555720265Z" level=info msg="StopPodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\"" Jan 13 20:36:09.556235 containerd[1484]: time="2025-01-13T20:36:09.556108885Z" level=info msg="TearDown network for sandbox \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\" successfully" Jan 13 20:36:09.556235 containerd[1484]: time="2025-01-13T20:36:09.556135255Z" level=info msg="StopPodSandbox for \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\" returns successfully" Jan 13 20:36:09.557685 containerd[1484]: time="2025-01-13T20:36:09.557614251Z" level=info msg="TearDown network for sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" successfully" Jan 13 20:36:09.557685 containerd[1484]: time="2025-01-13T20:36:09.557638737Z" level=info msg="StopPodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" returns successfully" Jan 13 20:36:09.559268 containerd[1484]: time="2025-01-13T20:36:09.558720357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-fz2mt,Uid:bdcc8529-6e18-4c7a-bfcb-a3677ead2383,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:36:09.559268 containerd[1484]: time="2025-01-13T20:36:09.558983481Z" level=info msg="StopPodSandbox for \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\"" Jan 13 20:36:09.559268 containerd[1484]: time="2025-01-13T20:36:09.559167296Z" level=info msg="TearDown network for sandbox 
\"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\" successfully" Jan 13 20:36:09.559268 containerd[1484]: time="2025-01-13T20:36:09.559184147Z" level=info msg="StopPodSandbox for \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\" returns successfully" Jan 13 20:36:09.559812 containerd[1484]: time="2025-01-13T20:36:09.559664689Z" level=info msg="StopPodSandbox for \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\"" Jan 13 20:36:09.559862 systemd[1]: run-netns-cni\x2da068603b\x2d263f\x2d5d1c\x2d3300\x2ddccbbfb7425f.mount: Deactivated successfully. Jan 13 20:36:09.561096 containerd[1484]: time="2025-01-13T20:36:09.560595887Z" level=info msg="TearDown network for sandbox \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\" successfully" Jan 13 20:36:09.561340 containerd[1484]: time="2025-01-13T20:36:09.561231550Z" level=info msg="StopPodSandbox for \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\" returns successfully" Jan 13 20:36:09.562518 containerd[1484]: time="2025-01-13T20:36:09.562382841Z" level=info msg="StopPodSandbox for \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\"" Jan 13 20:36:09.562943 containerd[1484]: time="2025-01-13T20:36:09.562878531Z" level=info msg="TearDown network for sandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\" successfully" Jan 13 20:36:09.562943 containerd[1484]: time="2025-01-13T20:36:09.562934476Z" level=info msg="StopPodSandbox for \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\" returns successfully" Jan 13 20:36:09.563765 kubelet[2667]: I0113 20:36:09.563550 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b" Jan 13 20:36:09.567768 kubelet[2667]: I0113 20:36:09.567743 2667 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4" Jan 13 20:36:09.568186 containerd[1484]: time="2025-01-13T20:36:09.568126902Z" level=info msg="StopPodSandbox for \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\"" Jan 13 20:36:09.569282 containerd[1484]: time="2025-01-13T20:36:09.568366441Z" level=info msg="Ensure that sandbox e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4 in task-service has been cleanup successfully" Jan 13 20:36:09.570899 systemd[1]: run-netns-cni\x2dc6b57b87\x2d53d2\x2d7799\x2da5cf\x2d84610c26b681.mount: Deactivated successfully. Jan 13 20:36:09.571353 containerd[1484]: time="2025-01-13T20:36:09.571327649Z" level=info msg="TearDown network for sandbox \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\" successfully" Jan 13 20:36:09.571455 containerd[1484]: time="2025-01-13T20:36:09.571351354Z" level=info msg="StopPodSandbox for \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\" returns successfully" Jan 13 20:36:09.571668 containerd[1484]: time="2025-01-13T20:36:09.571590303Z" level=info msg="StopPodSandbox for \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\"" Jan 13 20:36:09.571734 containerd[1484]: time="2025-01-13T20:36:09.571685902Z" level=info msg="TearDown network for sandbox \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\" successfully" Jan 13 20:36:09.571734 containerd[1484]: time="2025-01-13T20:36:09.571696101Z" level=info msg="StopPodSandbox for \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\" returns successfully" Jan 13 20:36:09.572238 containerd[1484]: time="2025-01-13T20:36:09.572203874Z" level=info msg="StopPodSandbox for \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\"" Jan 13 20:36:09.572343 containerd[1484]: time="2025-01-13T20:36:09.572320793Z" level=info msg="TearDown network for sandbox 
\"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\" successfully" Jan 13 20:36:09.572343 containerd[1484]: time="2025-01-13T20:36:09.572339448Z" level=info msg="StopPodSandbox for \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\" returns successfully" Jan 13 20:36:09.573142 containerd[1484]: time="2025-01-13T20:36:09.573109012Z" level=info msg="StopPodSandbox for \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\"" Jan 13 20:36:09.573232 containerd[1484]: time="2025-01-13T20:36:09.573204953Z" level=info msg="TearDown network for sandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\" successfully" Jan 13 20:36:09.573232 containerd[1484]: time="2025-01-13T20:36:09.573223839Z" level=info msg="StopPodSandbox for \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\" returns successfully" Jan 13 20:36:09.573433 kubelet[2667]: I0113 20:36:09.573404 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f" Jan 13 20:36:09.573823 containerd[1484]: time="2025-01-13T20:36:09.573787837Z" level=info msg="StopPodSandbox for \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\"" Jan 13 20:36:09.574007 containerd[1484]: time="2025-01-13T20:36:09.573976801Z" level=info msg="Ensure that sandbox 6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f in task-service has been cleanup successfully" Jan 13 20:36:09.575609 containerd[1484]: time="2025-01-13T20:36:09.574171266Z" level=info msg="TearDown network for sandbox \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\" successfully" Jan 13 20:36:09.575609 containerd[1484]: time="2025-01-13T20:36:09.574190502Z" level=info msg="StopPodSandbox for \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\" returns successfully" Jan 13 20:36:09.575609 containerd[1484]: 
time="2025-01-13T20:36:09.574273579Z" level=info msg="StopPodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\"" Jan 13 20:36:09.575609 containerd[1484]: time="2025-01-13T20:36:09.574364640Z" level=info msg="TearDown network for sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" successfully" Jan 13 20:36:09.575609 containerd[1484]: time="2025-01-13T20:36:09.574377323Z" level=info msg="StopPodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" returns successfully" Jan 13 20:36:09.575609 containerd[1484]: time="2025-01-13T20:36:09.574805627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-qmkl8,Uid:93092177-1d32-4f13-a83c-5cb4c8aca67a,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:36:09.575609 containerd[1484]: time="2025-01-13T20:36:09.574814825Z" level=info msg="StopPodSandbox for \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\"" Jan 13 20:36:09.575609 containerd[1484]: time="2025-01-13T20:36:09.575142770Z" level=info msg="TearDown network for sandbox \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\" successfully" Jan 13 20:36:09.575609 containerd[1484]: time="2025-01-13T20:36:09.575153650Z" level=info msg="StopPodSandbox for \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\" returns successfully" Jan 13 20:36:09.575609 containerd[1484]: time="2025-01-13T20:36:09.575613363Z" level=info msg="StopPodSandbox for \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\"" Jan 13 20:36:09.575977 containerd[1484]: time="2025-01-13T20:36:09.575696218Z" level=info msg="TearDown network for sandbox \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\" successfully" Jan 13 20:36:09.575977 containerd[1484]: time="2025-01-13T20:36:09.575710255Z" level=info msg="StopPodSandbox for \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\" returns successfully" 
Jan 13 20:36:09.576210 containerd[1484]: time="2025-01-13T20:36:09.576173815Z" level=info msg="StopPodSandbox for \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\"" Jan 13 20:36:09.576327 systemd[1]: run-netns-cni\x2d15804c3c\x2d9c19\x2d7323\x2da10e\x2d7747365c8ea3.mount: Deactivated successfully. Jan 13 20:36:09.577199 containerd[1484]: time="2025-01-13T20:36:09.576444553Z" level=info msg="TearDown network for sandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\" successfully" Jan 13 20:36:09.577199 containerd[1484]: time="2025-01-13T20:36:09.576463589Z" level=info msg="StopPodSandbox for \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\" returns successfully" Jan 13 20:36:09.579944 kubelet[2667]: I0113 20:36:09.579891 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877" Jan 13 20:36:09.580472 containerd[1484]: time="2025-01-13T20:36:09.580444019Z" level=info msg="StopPodSandbox for \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\"" Jan 13 20:36:09.580757 containerd[1484]: time="2025-01-13T20:36:09.580713335Z" level=info msg="Ensure that sandbox e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877 in task-service has been cleanup successfully" Jan 13 20:36:09.580988 containerd[1484]: time="2025-01-13T20:36:09.580958295Z" level=info msg="TearDown network for sandbox \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\" successfully" Jan 13 20:36:09.580988 containerd[1484]: time="2025-01-13T20:36:09.580980436Z" level=info msg="StopPodSandbox for \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\" returns successfully" Jan 13 20:36:09.581676 containerd[1484]: time="2025-01-13T20:36:09.581644984Z" level=info msg="StopPodSandbox for \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\"" Jan 13 20:36:09.581766 containerd[1484]: 
time="2025-01-13T20:36:09.581742166Z" level=info msg="TearDown network for sandbox \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\" successfully" Jan 13 20:36:09.581766 containerd[1484]: time="2025-01-13T20:36:09.581757886Z" level=info msg="StopPodSandbox for \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\" returns successfully" Jan 13 20:36:09.581946 containerd[1484]: time="2025-01-13T20:36:09.581927023Z" level=info msg="StopPodSandbox for \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\"" Jan 13 20:36:09.582012 containerd[1484]: time="2025-01-13T20:36:09.581995672Z" level=info msg="TearDown network for sandbox \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\" successfully" Jan 13 20:36:09.582012 containerd[1484]: time="2025-01-13T20:36:09.582009237Z" level=info msg="StopPodSandbox for \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\" returns successfully" Jan 13 20:36:10.238655 containerd[1484]: time="2025-01-13T20:36:10.238589049Z" level=info msg="StopPodSandbox for \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\"" Jan 13 20:36:10.239309 containerd[1484]: time="2025-01-13T20:36:10.238995913Z" level=info msg="TearDown network for sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" successfully" Jan 13 20:36:10.239309 containerd[1484]: time="2025-01-13T20:36:10.239053441Z" level=info msg="StopPodSandbox for \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" returns successfully" Jan 13 20:36:10.239309 containerd[1484]: time="2025-01-13T20:36:10.238688736Z" level=info msg="StopPodSandbox for \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\"" Jan 13 20:36:10.239309 containerd[1484]: time="2025-01-13T20:36:10.239183655Z" level=info msg="TearDown network for sandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\" successfully" Jan 13 20:36:10.239309 containerd[1484]: 
time="2025-01-13T20:36:10.239192652Z" level=info msg="StopPodSandbox for \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\" returns successfully" Jan 13 20:36:10.239309 containerd[1484]: time="2025-01-13T20:36:10.238729943Z" level=info msg="StopPodSandbox for \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\"" Jan 13 20:36:10.239667 containerd[1484]: time="2025-01-13T20:36:10.239646233Z" level=info msg="Ensure that sandbox a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b in task-service has been cleanup successfully" Jan 13 20:36:10.240233 containerd[1484]: time="2025-01-13T20:36:10.240208789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbrd5,Uid:6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d,Namespace:calico-system,Attempt:5,}" Jan 13 20:36:10.240389 containerd[1484]: time="2025-01-13T20:36:10.240349994Z" level=info msg="TearDown network for sandbox \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\" successfully" Jan 13 20:36:10.240469 containerd[1484]: time="2025-01-13T20:36:10.240433079Z" level=info msg="StopPodSandbox for \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\" returns successfully" Jan 13 20:36:10.240469 containerd[1484]: time="2025-01-13T20:36:10.239701527Z" level=info msg="StopPodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\"" Jan 13 20:36:10.240734 containerd[1484]: time="2025-01-13T20:36:10.240500337Z" level=info msg="StopPodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\"" Jan 13 20:36:10.240734 containerd[1484]: time="2025-01-13T20:36:10.240535973Z" level=info msg="TearDown network for sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" successfully" Jan 13 20:36:10.240734 containerd[1484]: time="2025-01-13T20:36:10.240544880Z" level=info msg="StopPodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" returns 
successfully" Jan 13 20:36:10.240734 containerd[1484]: time="2025-01-13T20:36:10.240585196Z" level=info msg="TearDown network for sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" successfully" Jan 13 20:36:10.240734 containerd[1484]: time="2025-01-13T20:36:10.240648935Z" level=info msg="StopPodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" returns successfully" Jan 13 20:36:10.241319 containerd[1484]: time="2025-01-13T20:36:10.240943317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc744d498-xqhts,Uid:ac41f38c-55b9-4a77-8a90-e5737b17fd15,Namespace:calico-system,Attempt:5,}" Jan 13 20:36:10.241319 containerd[1484]: time="2025-01-13T20:36:10.241082178Z" level=info msg="StopPodSandbox for \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\"" Jan 13 20:36:10.241319 containerd[1484]: time="2025-01-13T20:36:10.241158321Z" level=info msg="TearDown network for sandbox \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\" successfully" Jan 13 20:36:10.241319 containerd[1484]: time="2025-01-13T20:36:10.241169332Z" level=info msg="StopPodSandbox for \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\" returns successfully" Jan 13 20:36:10.241319 containerd[1484]: time="2025-01-13T20:36:10.241291260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46wk,Uid:e2b6f819-04b2-4600-8686-8ab182ac15ce,Namespace:kube-system,Attempt:5,}" Jan 13 20:36:10.242085 kubelet[2667]: E0113 20:36:10.240827 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:10.242126 containerd[1484]: time="2025-01-13T20:36:10.241357665Z" level=info msg="StopPodSandbox for \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\"" Jan 13 20:36:10.242126 containerd[1484]: 
time="2025-01-13T20:36:10.241429179Z" level=info msg="TearDown network for sandbox \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\" successfully" Jan 13 20:36:10.242126 containerd[1484]: time="2025-01-13T20:36:10.241437835Z" level=info msg="StopPodSandbox for \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\" returns successfully" Jan 13 20:36:10.242126 containerd[1484]: time="2025-01-13T20:36:10.241691502Z" level=info msg="StopPodSandbox for \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\"" Jan 13 20:36:10.242126 containerd[1484]: time="2025-01-13T20:36:10.241842776Z" level=info msg="TearDown network for sandbox \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\" successfully" Jan 13 20:36:10.242126 containerd[1484]: time="2025-01-13T20:36:10.241855690Z" level=info msg="StopPodSandbox for \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\" returns successfully" Jan 13 20:36:10.242874 containerd[1484]: time="2025-01-13T20:36:10.242367340Z" level=info msg="StopPodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\"" Jan 13 20:36:10.242874 containerd[1484]: time="2025-01-13T20:36:10.242447861Z" level=info msg="TearDown network for sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" successfully" Jan 13 20:36:10.242874 containerd[1484]: time="2025-01-13T20:36:10.242458180Z" level=info msg="StopPodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" returns successfully" Jan 13 20:36:10.242874 containerd[1484]: time="2025-01-13T20:36:10.242839647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bgl8r,Uid:b6de0b00-2ea2-4589-a7b0-2c07a644bac8,Namespace:kube-system,Attempt:5,}" Jan 13 20:36:10.242996 kubelet[2667]: E0113 20:36:10.242628 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:10.356484 systemd[1]: run-netns-cni\x2d9a176eed\x2d674d\x2d732f\x2d4eda\x2df53261c791a5.mount: Deactivated successfully. Jan 13 20:36:10.356762 systemd[1]: run-netns-cni\x2d3f0a7913\x2da31a\x2d0628\x2dafd3\x2d15d8ded5cc49.mount: Deactivated successfully. Jan 13 20:36:11.135719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2596087388.mount: Deactivated successfully. Jan 13 20:36:12.178518 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:57132.service - OpenSSH per-connection server daemon (10.0.0.1:57132). Jan 13 20:36:12.586792 sshd[4649]: Accepted publickey for core from 10.0.0.1 port 57132 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:12.589762 sshd-session[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:12.594863 systemd-logind[1468]: New session 10 of user core. Jan 13 20:36:12.603485 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:36:12.925367 sshd[4651]: Connection closed by 10.0.0.1 port 57132 Jan 13 20:36:12.929205 sshd-session[4649]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:12.935358 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:57132.service: Deactivated successfully. Jan 13 20:36:12.938438 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:36:12.939276 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:36:12.952700 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:57144.service - OpenSSH per-connection server daemon (10.0.0.1:57144). Jan 13 20:36:12.953275 systemd-logind[1468]: Removed session 10. 
Jan 13 20:36:13.012513 sshd[4667]: Accepted publickey for core from 10.0.0.1 port 57144 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:13.014485 sshd-session[4667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:13.019386 systemd-logind[1468]: New session 11 of user core. Jan 13 20:36:13.028683 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:36:13.136416 containerd[1484]: time="2025-01-13T20:36:13.136348211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:13.152489 containerd[1484]: time="2025-01-13T20:36:13.152433705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 20:36:13.171124 containerd[1484]: time="2025-01-13T20:36:13.171052292Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:13.200051 containerd[1484]: time="2025-01-13T20:36:13.199864958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:13.209275 containerd[1484]: time="2025-01-13T20:36:13.208163350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.789145906s" Jan 13 20:36:13.209275 containerd[1484]: time="2025-01-13T20:36:13.209191589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference 
\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 20:36:13.217292 sshd[4669]: Connection closed by 10.0.0.1 port 57144 Jan 13 20:36:13.216925 sshd-session[4667]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:13.226417 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:57144.service: Deactivated successfully. Jan 13 20:36:13.229146 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:36:13.231748 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:36:13.233804 containerd[1484]: time="2025-01-13T20:36:13.231749067Z" level=info msg="CreateContainer within sandbox \"1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:36:13.246048 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:57152.service - OpenSSH per-connection server daemon (10.0.0.1:57152). Jan 13 20:36:13.251144 systemd-logind[1468]: Removed session 11. Jan 13 20:36:13.341283 containerd[1484]: time="2025-01-13T20:36:13.339623970Z" level=info msg="CreateContainer within sandbox \"1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8\"" Jan 13 20:36:13.343514 containerd[1484]: time="2025-01-13T20:36:13.343473464Z" level=info msg="StartContainer for \"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8\"" Jan 13 20:36:13.367869 sshd[4772]: Accepted publickey for core from 10.0.0.1 port 57152 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:13.370687 sshd-session[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:13.373043 containerd[1484]: time="2025-01-13T20:36:13.372591373Z" level=error msg="Failed to destroy network for sandbox \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.378318 containerd[1484]: time="2025-01-13T20:36:13.375130727Z" level=error msg="encountered an error cleaning up failed sandbox \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.378768 containerd[1484]: time="2025-01-13T20:36:13.378657765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46wk,Uid:e2b6f819-04b2-4600-8686-8ab182ac15ce,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.385383 kubelet[2667]: E0113 20:36:13.385022 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.385383 kubelet[2667]: E0113 20:36:13.385192 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j46wk" Jan 13 20:36:13.385383 kubelet[2667]: E0113 20:36:13.385282 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j46wk" Jan 13 20:36:13.386988 kubelet[2667]: E0113 20:36:13.385353 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j46wk_kube-system(e2b6f819-04b2-4600-8686-8ab182ac15ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j46wk_kube-system(e2b6f819-04b2-4600-8686-8ab182ac15ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j46wk" podUID="e2b6f819-04b2-4600-8686-8ab182ac15ce" Jan 13 20:36:13.390088 systemd-logind[1468]: New session 12 of user core. Jan 13 20:36:13.394809 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 13 20:36:13.465337 containerd[1484]: time="2025-01-13T20:36:13.465157759Z" level=error msg="Failed to destroy network for sandbox \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.469600 containerd[1484]: time="2025-01-13T20:36:13.469199003Z" level=error msg="encountered an error cleaning up failed sandbox \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.469835 containerd[1484]: time="2025-01-13T20:36:13.469758773Z" level=error msg="Failed to destroy network for sandbox \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.469951 containerd[1484]: time="2025-01-13T20:36:13.469889549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbrd5,Uid:6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.472223 containerd[1484]: time="2025-01-13T20:36:13.470398003Z" level=error msg="encountered an error cleaning up failed sandbox \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.472223 containerd[1484]: time="2025-01-13T20:36:13.471083449Z" level=error msg="Failed to destroy network for sandbox \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.472223 containerd[1484]: time="2025-01-13T20:36:13.471673597Z" level=error msg="encountered an error cleaning up failed sandbox \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.472223 containerd[1484]: time="2025-01-13T20:36:13.471724522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-fz2mt,Uid:bdcc8529-6e18-4c7a-bfcb-a3677ead2383,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.472223 containerd[1484]: time="2025-01-13T20:36:13.471163830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bgl8r,Uid:b6de0b00-2ea2-4589-a7b0-2c07a644bac8,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.472474 kubelet[2667]: E0113 20:36:13.471471 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.472474 kubelet[2667]: E0113 20:36:13.471565 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:36:13.472474 kubelet[2667]: E0113 20:36:13.471650 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbrd5" Jan 13 20:36:13.472589 kubelet[2667]: E0113 20:36:13.471712 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kbrd5_calico-system(6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kbrd5_calico-system(6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kbrd5" podUID="6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d" Jan 13 20:36:13.472589 kubelet[2667]: E0113 20:36:13.472124 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.472589 kubelet[2667]: E0113 20:36:13.472154 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bgl8r" Jan 13 20:36:13.472723 kubelet[2667]: E0113 20:36:13.472174 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bgl8r" Jan 13 20:36:13.472723 kubelet[2667]: E0113 20:36:13.472204 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-bgl8r_kube-system(b6de0b00-2ea2-4589-a7b0-2c07a644bac8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bgl8r_kube-system(b6de0b00-2ea2-4589-a7b0-2c07a644bac8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bgl8r" podUID="b6de0b00-2ea2-4589-a7b0-2c07a644bac8" Jan 13 20:36:13.472723 kubelet[2667]: E0113 20:36:13.472239 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.472840 kubelet[2667]: E0113 20:36:13.472278 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" Jan 13 20:36:13.472840 kubelet[2667]: E0113 20:36:13.472295 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" Jan 13 20:36:13.472840 kubelet[2667]: E0113 20:36:13.472324 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d65d67f4f-fz2mt_calico-apiserver(bdcc8529-6e18-4c7a-bfcb-a3677ead2383)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d65d67f4f-fz2mt_calico-apiserver(bdcc8529-6e18-4c7a-bfcb-a3677ead2383)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" podUID="bdcc8529-6e18-4c7a-bfcb-a3677ead2383" Jan 13 20:36:13.486269 containerd[1484]: time="2025-01-13T20:36:13.485421945Z" level=error msg="Failed to destroy network for sandbox \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.486269 containerd[1484]: time="2025-01-13T20:36:13.486200856Z" level=error msg="encountered an error cleaning up failed sandbox \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.487545 containerd[1484]: time="2025-01-13T20:36:13.487333071Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-qmkl8,Uid:93092177-1d32-4f13-a83c-5cb4c8aca67a,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.487935 kubelet[2667]: E0113 20:36:13.487756 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.487935 kubelet[2667]: E0113 20:36:13.487862 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" Jan 13 20:36:13.487935 kubelet[2667]: E0113 20:36:13.487887 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" Jan 13 20:36:13.488156 kubelet[2667]: E0113 20:36:13.487962 2667 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d65d67f4f-qmkl8_calico-apiserver(93092177-1d32-4f13-a83c-5cb4c8aca67a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d65d67f4f-qmkl8_calico-apiserver(93092177-1d32-4f13-a83c-5cb4c8aca67a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" podUID="93092177-1d32-4f13-a83c-5cb4c8aca67a" Jan 13 20:36:13.499325 containerd[1484]: time="2025-01-13T20:36:13.498838871Z" level=error msg="Failed to destroy network for sandbox \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.499728 containerd[1484]: time="2025-01-13T20:36:13.499696460Z" level=error msg="encountered an error cleaning up failed sandbox \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.500837 containerd[1484]: time="2025-01-13T20:36:13.500801443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc744d498-xqhts,Uid:ac41f38c-55b9-4a77-8a90-e5737b17fd15,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.501281 kubelet[2667]: E0113 20:36:13.501191 2667 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:36:13.501398 kubelet[2667]: E0113 20:36:13.501369 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" Jan 13 20:36:13.501442 kubelet[2667]: E0113 20:36:13.501408 2667 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" Jan 13 20:36:13.501594 kubelet[2667]: E0113 20:36:13.501487 2667 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bc744d498-xqhts_calico-system(ac41f38c-55b9-4a77-8a90-e5737b17fd15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-bc744d498-xqhts_calico-system(ac41f38c-55b9-4a77-8a90-e5737b17fd15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" podUID="ac41f38c-55b9-4a77-8a90-e5737b17fd15" Jan 13 20:36:13.514453 systemd[1]: Started cri-containerd-6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8.scope - libcontainer container 6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8. Jan 13 20:36:13.633707 containerd[1484]: time="2025-01-13T20:36:13.633591599Z" level=info msg="StartContainer for \"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8\" returns successfully" Jan 13 20:36:13.634880 sshd[4864]: Connection closed by 10.0.0.1 port 57152 Jan 13 20:36:13.636858 sshd-session[4772]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:13.639947 kubelet[2667]: I0113 20:36:13.639909 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc" Jan 13 20:36:13.640382 containerd[1484]: time="2025-01-13T20:36:13.640344922Z" level=info msg="StopPodSandbox for \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\"" Jan 13 20:36:13.640579 containerd[1484]: time="2025-01-13T20:36:13.640549005Z" level=info msg="Ensure that sandbox 5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc in task-service has been cleanup successfully" Jan 13 20:36:13.641433 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:57152.service: Deactivated successfully. Jan 13 20:36:13.643539 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit. 
Jan 13 20:36:13.644898 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:36:13.646270 systemd-logind[1468]: Removed session 12. Jan 13 20:36:13.646768 kubelet[2667]: I0113 20:36:13.646574 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256" Jan 13 20:36:13.646821 containerd[1484]: time="2025-01-13T20:36:13.646750411Z" level=info msg="StopPodSandbox for \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\"" Jan 13 20:36:13.647090 containerd[1484]: time="2025-01-13T20:36:13.646915682Z" level=info msg="Ensure that sandbox 9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256 in task-service has been cleanup successfully" Jan 13 20:36:13.648447 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:36:13.648520 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 13 20:36:13.648604 containerd[1484]: time="2025-01-13T20:36:13.648569545Z" level=info msg="TearDown network for sandbox \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\" successfully" Jan 13 20:36:13.648604 containerd[1484]: time="2025-01-13T20:36:13.648592608Z" level=info msg="StopPodSandbox for \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\" returns successfully" Jan 13 20:36:13.648695 containerd[1484]: time="2025-01-13T20:36:13.648588190Z" level=info msg="TearDown network for sandbox \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\" successfully" Jan 13 20:36:13.648695 containerd[1484]: time="2025-01-13T20:36:13.648649325Z" level=info msg="StopPodSandbox for \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\" returns successfully" Jan 13 20:36:13.648952 containerd[1484]: time="2025-01-13T20:36:13.648922286Z" level=info msg="StopPodSandbox for \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\"" Jan 13 
20:36:13.648993 containerd[1484]: time="2025-01-13T20:36:13.648956070Z" level=info msg="StopPodSandbox for \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\"" Jan 13 20:36:13.649020 containerd[1484]: time="2025-01-13T20:36:13.649006245Z" level=info msg="TearDown network for sandbox \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\" successfully" Jan 13 20:36:13.649020 containerd[1484]: time="2025-01-13T20:36:13.649016805Z" level=info msg="StopPodSandbox for \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\" returns successfully" Jan 13 20:36:13.649071 containerd[1484]: time="2025-01-13T20:36:13.649041571Z" level=info msg="TearDown network for sandbox \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\" successfully" Jan 13 20:36:13.649071 containerd[1484]: time="2025-01-13T20:36:13.649053313Z" level=info msg="StopPodSandbox for \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\" returns successfully" Jan 13 20:36:13.649294 kubelet[2667]: I0113 20:36:13.649272 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c" Jan 13 20:36:13.649697 containerd[1484]: time="2025-01-13T20:36:13.649657195Z" level=info msg="StopPodSandbox for \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\"" Jan 13 20:36:13.649859 containerd[1484]: time="2025-01-13T20:36:13.649829810Z" level=info msg="Ensure that sandbox 48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c in task-service has been cleanup successfully" Jan 13 20:36:13.650169 containerd[1484]: time="2025-01-13T20:36:13.650129843Z" level=info msg="StopPodSandbox for \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\"" Jan 13 20:36:13.650507 containerd[1484]: time="2025-01-13T20:36:13.650422663Z" level=info msg="StopPodSandbox for \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\"" Jan 
13 20:36:13.650507 containerd[1484]: time="2025-01-13T20:36:13.650445385Z" level=info msg="TearDown network for sandbox \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\" successfully" Jan 13 20:36:13.650507 containerd[1484]: time="2025-01-13T20:36:13.650463259Z" level=info msg="StopPodSandbox for \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\" returns successfully" Jan 13 20:36:13.650591 containerd[1484]: time="2025-01-13T20:36:13.650569598Z" level=info msg="TearDown network for sandbox \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\" successfully" Jan 13 20:36:13.650591 containerd[1484]: time="2025-01-13T20:36:13.650580969Z" level=info msg="StopPodSandbox for \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\" returns successfully" Jan 13 20:36:13.650800 containerd[1484]: time="2025-01-13T20:36:13.650717525Z" level=info msg="TearDown network for sandbox \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\" successfully" Jan 13 20:36:13.650800 containerd[1484]: time="2025-01-13T20:36:13.650732844Z" level=info msg="StopPodSandbox for \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\" returns successfully" Jan 13 20:36:13.651274 containerd[1484]: time="2025-01-13T20:36:13.651075347Z" level=info msg="StopPodSandbox for \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\"" Jan 13 20:36:13.651274 containerd[1484]: time="2025-01-13T20:36:13.651120302Z" level=info msg="StopPodSandbox for \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\"" Jan 13 20:36:13.651274 containerd[1484]: time="2025-01-13T20:36:13.651167811Z" level=info msg="TearDown network for sandbox \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\" successfully" Jan 13 20:36:13.651274 containerd[1484]: time="2025-01-13T20:36:13.651178220Z" level=info msg="StopPodSandbox for \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\" returns 
successfully" Jan 13 20:36:13.651274 containerd[1484]: time="2025-01-13T20:36:13.651079726Z" level=info msg="StopPodSandbox for \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\"" Jan 13 20:36:13.651274 containerd[1484]: time="2025-01-13T20:36:13.651212124Z" level=info msg="TearDown network for sandbox \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\" successfully" Jan 13 20:36:13.651274 containerd[1484]: time="2025-01-13T20:36:13.651225840Z" level=info msg="StopPodSandbox for \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\" returns successfully" Jan 13 20:36:13.651448 containerd[1484]: time="2025-01-13T20:36:13.651290892Z" level=info msg="TearDown network for sandbox \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\" successfully" Jan 13 20:36:13.651448 containerd[1484]: time="2025-01-13T20:36:13.651301882Z" level=info msg="StopPodSandbox for \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\" returns successfully" Jan 13 20:36:13.651560 containerd[1484]: time="2025-01-13T20:36:13.651535761Z" level=info msg="StopPodSandbox for \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\"" Jan 13 20:36:13.651684 containerd[1484]: time="2025-01-13T20:36:13.651609299Z" level=info msg="TearDown network for sandbox \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\" successfully" Jan 13 20:36:13.651684 containerd[1484]: time="2025-01-13T20:36:13.651618997Z" level=info msg="StopPodSandbox for \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\" returns successfully" Jan 13 20:36:13.651851 containerd[1484]: time="2025-01-13T20:36:13.651831115Z" level=info msg="StopPodSandbox for \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\"" Jan 13 20:36:13.651922 containerd[1484]: time="2025-01-13T20:36:13.651905715Z" level=info msg="TearDown network for sandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\" 
successfully" Jan 13 20:36:13.651922 containerd[1484]: time="2025-01-13T20:36:13.651918149Z" level=info msg="StopPodSandbox for \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\" returns successfully" Jan 13 20:36:13.651983 containerd[1484]: time="2025-01-13T20:36:13.651959326Z" level=info msg="StopPodSandbox for \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\"" Jan 13 20:36:13.652071 containerd[1484]: time="2025-01-13T20:36:13.652029978Z" level=info msg="TearDown network for sandbox \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\" successfully" Jan 13 20:36:13.652071 containerd[1484]: time="2025-01-13T20:36:13.652044576Z" level=info msg="StopPodSandbox for \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\" returns successfully" Jan 13 20:36:13.652126 containerd[1484]: time="2025-01-13T20:36:13.652075103Z" level=info msg="StopPodSandbox for \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\"" Jan 13 20:36:13.652149 containerd[1484]: time="2025-01-13T20:36:13.652131739Z" level=info msg="TearDown network for sandbox \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\" successfully" Jan 13 20:36:13.652149 containerd[1484]: time="2025-01-13T20:36:13.652140205Z" level=info msg="StopPodSandbox for \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\" returns successfully" Jan 13 20:36:13.652873 containerd[1484]: time="2025-01-13T20:36:13.652807577Z" level=info msg="StopPodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\"" Jan 13 20:36:13.653205 containerd[1484]: time="2025-01-13T20:36:13.652828116Z" level=info msg="StopPodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\"" Jan 13 20:36:13.653205 containerd[1484]: time="2025-01-13T20:36:13.652849065Z" level=info msg="StopPodSandbox for \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\"" Jan 13 20:36:13.653205 
containerd[1484]: time="2025-01-13T20:36:13.653126887Z" level=info msg="TearDown network for sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" successfully" Jan 13 20:36:13.653205 containerd[1484]: time="2025-01-13T20:36:13.653137948Z" level=info msg="StopPodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" returns successfully" Jan 13 20:36:13.653419 containerd[1484]: time="2025-01-13T20:36:13.653257963Z" level=info msg="TearDown network for sandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\" successfully" Jan 13 20:36:13.653419 containerd[1484]: time="2025-01-13T20:36:13.653270316Z" level=info msg="StopPodSandbox for \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\" returns successfully" Jan 13 20:36:13.653470 kubelet[2667]: I0113 20:36:13.653458 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378" Jan 13 20:36:13.653591 containerd[1484]: time="2025-01-13T20:36:13.653555771Z" level=info msg="TearDown network for sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" successfully" Jan 13 20:36:13.653591 containerd[1484]: time="2025-01-13T20:36:13.653576189Z" level=info msg="StopPodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" returns successfully" Jan 13 20:36:13.653794 kubelet[2667]: E0113 20:36:13.653777 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:13.654015 containerd[1484]: time="2025-01-13T20:36:13.653997951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-qmkl8,Uid:93092177-1d32-4f13-a83c-5cb4c8aca67a,Namespace:calico-apiserver,Attempt:6,}" Jan 13 20:36:13.654785 containerd[1484]: time="2025-01-13T20:36:13.654008110Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bgl8r,Uid:b6de0b00-2ea2-4589-a7b0-2c07a644bac8,Namespace:kube-system,Attempt:6,}" Jan 13 20:36:13.655038 containerd[1484]: time="2025-01-13T20:36:13.654083141Z" level=info msg="StopPodSandbox for \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\"" Jan 13 20:36:13.655296 containerd[1484]: time="2025-01-13T20:36:13.655156706Z" level=info msg="Ensure that sandbox ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378 in task-service has been cleanup successfully" Jan 13 20:36:13.655438 containerd[1484]: time="2025-01-13T20:36:13.654095464Z" level=info msg="StopPodSandbox for \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\"" Jan 13 20:36:13.655558 containerd[1484]: time="2025-01-13T20:36:13.655484771Z" level=info msg="TearDown network for sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" successfully" Jan 13 20:36:13.655558 containerd[1484]: time="2025-01-13T20:36:13.655495381Z" level=info msg="StopPodSandbox for \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" returns successfully" Jan 13 20:36:13.655799 containerd[1484]: time="2025-01-13T20:36:13.655766529Z" level=info msg="TearDown network for sandbox \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\" successfully" Jan 13 20:36:13.655799 containerd[1484]: time="2025-01-13T20:36:13.655784073Z" level=info msg="StopPodSandbox for \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\" returns successfully" Jan 13 20:36:13.655981 containerd[1484]: time="2025-01-13T20:36:13.655959743Z" level=info msg="StopPodSandbox for \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\"" Jan 13 20:36:13.656049 containerd[1484]: time="2025-01-13T20:36:13.656034533Z" level=info msg="TearDown network for sandbox \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\" successfully" Jan 13 20:36:13.656049 
containerd[1484]: time="2025-01-13T20:36:13.656044141Z" level=info msg="StopPodSandbox for \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\" returns successfully" Jan 13 20:36:13.656190 containerd[1484]: time="2025-01-13T20:36:13.656165839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbrd5,Uid:6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d,Namespace:calico-system,Attempt:6,}" Jan 13 20:36:13.656690 containerd[1484]: time="2025-01-13T20:36:13.656662191Z" level=info msg="StopPodSandbox for \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\"" Jan 13 20:36:13.656752 containerd[1484]: time="2025-01-13T20:36:13.656738333Z" level=info msg="TearDown network for sandbox \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\" successfully" Jan 13 20:36:13.656776 containerd[1484]: time="2025-01-13T20:36:13.656750065Z" level=info msg="StopPodSandbox for \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\" returns successfully" Jan 13 20:36:13.656932 containerd[1484]: time="2025-01-13T20:36:13.656912831Z" level=info msg="StopPodSandbox for \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\"" Jan 13 20:36:13.657057 containerd[1484]: time="2025-01-13T20:36:13.657038787Z" level=info msg="TearDown network for sandbox \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\" successfully" Jan 13 20:36:13.657057 containerd[1484]: time="2025-01-13T20:36:13.657053585Z" level=info msg="StopPodSandbox for \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\" returns successfully" Jan 13 20:36:13.657352 containerd[1484]: time="2025-01-13T20:36:13.657332198Z" level=info msg="StopPodSandbox for \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\"" Jan 13 20:36:13.657435 containerd[1484]: time="2025-01-13T20:36:13.657418349Z" level=info msg="TearDown network for sandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\" 
successfully" Jan 13 20:36:13.657462 containerd[1484]: time="2025-01-13T20:36:13.657433598Z" level=info msg="StopPodSandbox for \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\" returns successfully" Jan 13 20:36:13.657832 containerd[1484]: time="2025-01-13T20:36:13.657793693Z" level=info msg="StopPodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\"" Jan 13 20:36:13.657933 containerd[1484]: time="2025-01-13T20:36:13.657912737Z" level=info msg="TearDown network for sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" successfully" Jan 13 20:36:13.657933 containerd[1484]: time="2025-01-13T20:36:13.657928968Z" level=info msg="StopPodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" returns successfully" Jan 13 20:36:13.658343 containerd[1484]: time="2025-01-13T20:36:13.658319881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc744d498-xqhts,Uid:ac41f38c-55b9-4a77-8a90-e5737b17fd15,Namespace:calico-system,Attempt:6,}" Jan 13 20:36:13.658951 kubelet[2667]: E0113 20:36:13.658785 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:13.660373 kubelet[2667]: I0113 20:36:13.660345 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed" Jan 13 20:36:13.661114 containerd[1484]: time="2025-01-13T20:36:13.661074259Z" level=info msg="StopPodSandbox for \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\"" Jan 13 20:36:13.661444 containerd[1484]: time="2025-01-13T20:36:13.661227947Z" level=info msg="Ensure that sandbox b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed in task-service has been cleanup successfully" Jan 13 20:36:13.661494 containerd[1484]: 
time="2025-01-13T20:36:13.661450917Z" level=info msg="TearDown network for sandbox \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\" successfully" Jan 13 20:36:13.661494 containerd[1484]: time="2025-01-13T20:36:13.661462929Z" level=info msg="StopPodSandbox for \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\" returns successfully" Jan 13 20:36:13.661971 containerd[1484]: time="2025-01-13T20:36:13.661930235Z" level=info msg="StopPodSandbox for \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\"" Jan 13 20:36:13.662141 containerd[1484]: time="2025-01-13T20:36:13.662011849Z" level=info msg="TearDown network for sandbox \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\" successfully" Jan 13 20:36:13.662141 containerd[1484]: time="2025-01-13T20:36:13.662025886Z" level=info msg="StopPodSandbox for \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\" returns successfully" Jan 13 20:36:13.662617 containerd[1484]: time="2025-01-13T20:36:13.662589653Z" level=info msg="StopPodSandbox for \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\"" Jan 13 20:36:13.662772 containerd[1484]: time="2025-01-13T20:36:13.662729596Z" level=info msg="TearDown network for sandbox \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\" successfully" Jan 13 20:36:13.662772 containerd[1484]: time="2025-01-13T20:36:13.662756396Z" level=info msg="StopPodSandbox for \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\" returns successfully" Jan 13 20:36:13.663008 containerd[1484]: time="2025-01-13T20:36:13.662967031Z" level=info msg="StopPodSandbox for \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\"" Jan 13 20:36:13.663078 containerd[1484]: time="2025-01-13T20:36:13.663054615Z" level=info msg="TearDown network for sandbox \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\" successfully" Jan 13 20:36:13.663078 containerd[1484]: 
time="2025-01-13T20:36:13.663069513Z" level=info msg="StopPodSandbox for \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\" returns successfully" Jan 13 20:36:13.663741 kubelet[2667]: I0113 20:36:13.663396 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6" Jan 13 20:36:13.663787 containerd[1484]: time="2025-01-13T20:36:13.663566015Z" level=info msg="StopPodSandbox for \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\"" Jan 13 20:36:13.663885 containerd[1484]: time="2025-01-13T20:36:13.663845850Z" level=info msg="TearDown network for sandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\" successfully" Jan 13 20:36:13.663885 containerd[1484]: time="2025-01-13T20:36:13.663881838Z" level=info msg="StopPodSandbox for \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\" returns successfully" Jan 13 20:36:13.663993 containerd[1484]: time="2025-01-13T20:36:13.663968230Z" level=info msg="StopPodSandbox for \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\"" Jan 13 20:36:13.664205 containerd[1484]: time="2025-01-13T20:36:13.664166011Z" level=info msg="StopPodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\"" Jan 13 20:36:13.664286 containerd[1484]: time="2025-01-13T20:36:13.664261901Z" level=info msg="Ensure that sandbox 0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6 in task-service has been cleanup successfully" Jan 13 20:36:13.664428 containerd[1484]: time="2025-01-13T20:36:13.664279424Z" level=info msg="TearDown network for sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" successfully" Jan 13 20:36:13.664428 containerd[1484]: time="2025-01-13T20:36:13.664425378Z" level=info msg="StopPodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" returns successfully" Jan 13 
20:36:13.664601 containerd[1484]: time="2025-01-13T20:36:13.664581290Z" level=info msg="TearDown network for sandbox \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\" successfully" Jan 13 20:36:13.664632 containerd[1484]: time="2025-01-13T20:36:13.664599725Z" level=info msg="StopPodSandbox for \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\" returns successfully" Jan 13 20:36:13.664888 containerd[1484]: time="2025-01-13T20:36:13.664855936Z" level=info msg="StopPodSandbox for \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\"" Jan 13 20:36:13.664954 containerd[1484]: time="2025-01-13T20:36:13.664935545Z" level=info msg="TearDown network for sandbox \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\" successfully" Jan 13 20:36:13.664954 containerd[1484]: time="2025-01-13T20:36:13.664951275Z" level=info msg="StopPodSandbox for \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\" returns successfully" Jan 13 20:36:13.665019 containerd[1484]: time="2025-01-13T20:36:13.664938080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-fz2mt,Uid:bdcc8529-6e18-4c7a-bfcb-a3677ead2383,Namespace:calico-apiserver,Attempt:6,}" Jan 13 20:36:13.665186 containerd[1484]: time="2025-01-13T20:36:13.665165366Z" level=info msg="StopPodSandbox for \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\"" Jan 13 20:36:13.665343 containerd[1484]: time="2025-01-13T20:36:13.665324625Z" level=info msg="TearDown network for sandbox \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\" successfully" Jan 13 20:36:13.665343 containerd[1484]: time="2025-01-13T20:36:13.665339242Z" level=info msg="StopPodSandbox for \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\" returns successfully" Jan 13 20:36:13.665585 containerd[1484]: time="2025-01-13T20:36:13.665558244Z" level=info msg="StopPodSandbox for 
\"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\"" Jan 13 20:36:13.665669 containerd[1484]: time="2025-01-13T20:36:13.665652290Z" level=info msg="TearDown network for sandbox \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\" successfully" Jan 13 20:36:13.665669 containerd[1484]: time="2025-01-13T20:36:13.665666016Z" level=info msg="StopPodSandbox for \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\" returns successfully" Jan 13 20:36:13.665936 containerd[1484]: time="2025-01-13T20:36:13.665915293Z" level=info msg="StopPodSandbox for \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\"" Jan 13 20:36:13.666011 containerd[1484]: time="2025-01-13T20:36:13.665997537Z" level=info msg="TearDown network for sandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\" successfully" Jan 13 20:36:13.666046 containerd[1484]: time="2025-01-13T20:36:13.666009871Z" level=info msg="StopPodSandbox for \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\" returns successfully" Jan 13 20:36:13.666258 containerd[1484]: time="2025-01-13T20:36:13.666221138Z" level=info msg="StopPodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\"" Jan 13 20:36:13.666339 containerd[1484]: time="2025-01-13T20:36:13.666322027Z" level=info msg="TearDown network for sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" successfully" Jan 13 20:36:13.666339 containerd[1484]: time="2025-01-13T20:36:13.666335823Z" level=info msg="StopPodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" returns successfully" Jan 13 20:36:13.666531 kubelet[2667]: E0113 20:36:13.666505 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:13.666763 containerd[1484]: time="2025-01-13T20:36:13.666735142Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46wk,Uid:e2b6f819-04b2-4600-8686-8ab182ac15ce,Namespace:kube-system,Attempt:6,}" Jan 13 20:36:14.124145 systemd[1]: run-netns-cni\x2d9aa57cfa\x2d171e\x2dbb76\x2decb3\x2d32998bfbd44d.mount: Deactivated successfully. Jan 13 20:36:14.124289 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c-shm.mount: Deactivated successfully. Jan 13 20:36:14.124391 systemd[1]: run-netns-cni\x2dcf9e8a41\x2df637\x2de7ab\x2df7d1\x2d724ab01d5b70.mount: Deactivated successfully. Jan 13 20:36:14.124481 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256-shm.mount: Deactivated successfully. Jan 13 20:36:14.124567 systemd[1]: run-netns-cni\x2d97d177a2\x2d2a48\x2dd84f\x2d6236\x2d9623137de810.mount: Deactivated successfully. Jan 13 20:36:14.124658 systemd[1]: run-netns-cni\x2d76398d54\x2d77f1\x2dc884\x2d8e4e\x2da55c81f7b946.mount: Deactivated successfully. Jan 13 20:36:14.124733 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed-shm.mount: Deactivated successfully. Jan 13 20:36:14.124822 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc-shm.mount: Deactivated successfully. Jan 13 20:36:14.124928 systemd[1]: run-netns-cni\x2d8597e4c6\x2d6868\x2d16c1\x2d8851\x2d6158304bc30a.mount: Deactivated successfully. Jan 13 20:36:14.125016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6-shm.mount: Deactivated successfully. 
Jan 13 20:36:14.555651 systemd-networkd[1393]: calic56c8c277e7: Link UP Jan 13 20:36:14.560889 systemd-networkd[1393]: calic56c8c277e7: Gained carrier Jan 13 20:36:14.567185 kubelet[2667]: I0113 20:36:14.567112 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5gwxk" podStartSLOduration=3.196395134 podStartE2EDuration="23.567087156s" podCreationTimestamp="2025-01-13 20:35:51 +0000 UTC" firstStartedPulling="2025-01-13 20:35:52.840434027 +0000 UTC m=+21.724256817" lastFinishedPulling="2025-01-13 20:36:13.211126049 +0000 UTC m=+42.094948839" observedRunningTime="2025-01-13 20:36:13.768970793 +0000 UTC m=+42.652793593" watchObservedRunningTime="2025-01-13 20:36:14.567087156 +0000 UTC m=+43.450909946" Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.344 [INFO][5009] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.363 [INFO][5009] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d65d67f4f--fz2mt-eth0 calico-apiserver-5d65d67f4f- calico-apiserver bdcc8529-6e18-4c7a-bfcb-a3677ead2383 834 0 2025-01-13 20:35:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d65d67f4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d65d67f4f-fz2mt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic56c8c277e7 [] []}} ContainerID="50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-fz2mt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--fz2mt-" Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.363 [INFO][5009] cni-plugin/k8s.go 
77: Extracted identifiers for CmdAddK8s ContainerID="50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-fz2mt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--fz2mt-eth0" Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.481 [INFO][5056] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" HandleID="k8s-pod-network.50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" Workload="localhost-k8s-calico--apiserver--5d65d67f4f--fz2mt-eth0" Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.504 [INFO][5056] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" HandleID="k8s-pod-network.50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" Workload="localhost-k8s-calico--apiserver--5d65d67f4f--fz2mt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ba900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d65d67f4f-fz2mt", "timestamp":"2025-01-13 20:36:14.48131717 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.504 [INFO][5056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.504 [INFO][5056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.504 [INFO][5056] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.509 [INFO][5056] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" host="localhost" Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.521 [INFO][5056] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.526 [INFO][5056] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.528 [INFO][5056] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.530 [INFO][5056] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.530 [INFO][5056] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" host="localhost" Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.531 [INFO][5056] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973 Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.537 [INFO][5056] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" host="localhost" Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.541 [INFO][5056] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" host="localhost" Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.541 [INFO][5056] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" host="localhost" Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.541 [INFO][5056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:36:14.571593 containerd[1484]: 2025-01-13 20:36:14.541 [INFO][5056] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" HandleID="k8s-pod-network.50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" Workload="localhost-k8s-calico--apiserver--5d65d67f4f--fz2mt-eth0" Jan 13 20:36:14.572585 containerd[1484]: 2025-01-13 20:36:14.545 [INFO][5009] cni-plugin/k8s.go 386: Populated endpoint ContainerID="50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-fz2mt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--fz2mt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d65d67f4f--fz2mt-eth0", GenerateName:"calico-apiserver-5d65d67f4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bdcc8529-6e18-4c7a-bfcb-a3677ead2383", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 35, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d65d67f4f", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d65d67f4f-fz2mt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic56c8c277e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:36:14.572585 containerd[1484]: 2025-01-13 20:36:14.546 [INFO][5009] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-fz2mt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--fz2mt-eth0" Jan 13 20:36:14.572585 containerd[1484]: 2025-01-13 20:36:14.546 [INFO][5009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic56c8c277e7 ContainerID="50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-fz2mt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--fz2mt-eth0" Jan 13 20:36:14.572585 containerd[1484]: 2025-01-13 20:36:14.555 [INFO][5009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-fz2mt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--fz2mt-eth0" Jan 13 20:36:14.572585 containerd[1484]: 2025-01-13 20:36:14.556 [INFO][5009] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-fz2mt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--fz2mt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d65d67f4f--fz2mt-eth0", GenerateName:"calico-apiserver-5d65d67f4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bdcc8529-6e18-4c7a-bfcb-a3677ead2383", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 35, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d65d67f4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973", Pod:"calico-apiserver-5d65d67f4f-fz2mt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic56c8c277e7", MAC:"d2:dc:22:7b:84:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:36:14.572585 containerd[1484]: 2025-01-13 20:36:14.568 [INFO][5009] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-fz2mt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--fz2mt-eth0" Jan 13 20:36:14.586895 systemd-networkd[1393]: caliaeee0b77f46: Link UP Jan 13 20:36:14.587804 systemd-networkd[1393]: caliaeee0b77f46: Gained carrier Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.285 [INFO][4965] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.337 [INFO][4965] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--bgl8r-eth0 coredns-7db6d8ff4d- kube-system b6de0b00-2ea2-4589-a7b0-2c07a644bac8 831 0 2025-01-13 20:35:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-bgl8r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaeee0b77f46 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bgl8r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bgl8r-" Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.337 [INFO][4965] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bgl8r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bgl8r-eth0" Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.487 [INFO][5045] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" 
HandleID="k8s-pod-network.3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" Workload="localhost-k8s-coredns--7db6d8ff4d--bgl8r-eth0" Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.509 [INFO][5045] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" HandleID="k8s-pod-network.3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" Workload="localhost-k8s-coredns--7db6d8ff4d--bgl8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b2b80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-bgl8r", "timestamp":"2025-01-13 20:36:14.484353558 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.509 [INFO][5045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.541 [INFO][5045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.542 [INFO][5045] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.543 [INFO][5045] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" host="localhost" Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.548 [INFO][5045] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.552 [INFO][5045] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.554 [INFO][5045] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.557 [INFO][5045] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.557 [INFO][5045] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" host="localhost" Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.562 [INFO][5045] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50 Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.568 [INFO][5045] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" host="localhost" Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.575 [INFO][5045] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" host="localhost" Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.575 [INFO][5045] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" host="localhost" Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.575 [INFO][5045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:36:14.606680 containerd[1484]: 2025-01-13 20:36:14.575 [INFO][5045] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" HandleID="k8s-pod-network.3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" Workload="localhost-k8s-coredns--7db6d8ff4d--bgl8r-eth0" Jan 13 20:36:14.607377 containerd[1484]: 2025-01-13 20:36:14.582 [INFO][4965] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bgl8r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bgl8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bgl8r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b6de0b00-2ea2-4589-a7b0-2c07a644bac8", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 35, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-bgl8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaeee0b77f46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:36:14.607377 containerd[1484]: 2025-01-13 20:36:14.583 [INFO][4965] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bgl8r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bgl8r-eth0" Jan 13 20:36:14.607377 containerd[1484]: 2025-01-13 20:36:14.583 [INFO][4965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaeee0b77f46 ContainerID="3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bgl8r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bgl8r-eth0" Jan 13 20:36:14.607377 containerd[1484]: 2025-01-13 20:36:14.588 [INFO][4965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bgl8r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bgl8r-eth0" Jan 13 
20:36:14.607377 containerd[1484]: 2025-01-13 20:36:14.589 [INFO][4965] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bgl8r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bgl8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bgl8r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b6de0b00-2ea2-4589-a7b0-2c07a644bac8", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 35, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50", Pod:"coredns-7db6d8ff4d-bgl8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaeee0b77f46", MAC:"32:6f:90:0a:f8:6f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:36:14.607377 containerd[1484]: 2025-01-13 20:36:14.603 [INFO][4965] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bgl8r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bgl8r-eth0" Jan 13 20:36:14.633760 systemd-networkd[1393]: cali502d3059b04: Link UP Jan 13 20:36:14.635066 systemd-networkd[1393]: cali502d3059b04: Gained carrier Jan 13 20:36:14.645188 containerd[1484]: time="2025-01-13T20:36:14.644796137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:36:14.645188 containerd[1484]: time="2025-01-13T20:36:14.644896826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:36:14.645188 containerd[1484]: time="2025-01-13T20:36:14.644916202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:14.645188 containerd[1484]: time="2025-01-13T20:36:14.645032660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.317 [INFO][4985] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.334 [INFO][4985] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kbrd5-eth0 csi-node-driver- calico-system 6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d 660 0 2025-01-13 20:35:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kbrd5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali502d3059b04 [] []}} ContainerID="825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" Namespace="calico-system" Pod="csi-node-driver-kbrd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbrd5-" Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.334 [INFO][4985] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" Namespace="calico-system" Pod="csi-node-driver-kbrd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbrd5-eth0" Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.480 [INFO][5043] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" HandleID="k8s-pod-network.825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" Workload="localhost-k8s-csi--node--driver--kbrd5-eth0" Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.509 [INFO][5043] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" HandleID="k8s-pod-network.825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" Workload="localhost-k8s-csi--node--driver--kbrd5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000393530), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kbrd5", "timestamp":"2025-01-13 20:36:14.480790572 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.509 [INFO][5043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.575 [INFO][5043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.576 [INFO][5043] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.579 [INFO][5043] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" host="localhost" Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.588 [INFO][5043] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.595 [INFO][5043] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.603 [INFO][5043] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.606 [INFO][5043] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.606 [INFO][5043] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" host="localhost" Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.608 [INFO][5043] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.614 [INFO][5043] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" host="localhost" Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.621 [INFO][5043] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" host="localhost" Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.621 [INFO][5043] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" host="localhost" Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.621 [INFO][5043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:36:14.654676 containerd[1484]: 2025-01-13 20:36:14.621 [INFO][5043] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" HandleID="k8s-pod-network.825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" Workload="localhost-k8s-csi--node--driver--kbrd5-eth0" Jan 13 20:36:14.655512 containerd[1484]: 2025-01-13 20:36:14.628 [INFO][4985] cni-plugin/k8s.go 386: Populated endpoint ContainerID="825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" Namespace="calico-system" Pod="csi-node-driver-kbrd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbrd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kbrd5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 35, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kbrd5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali502d3059b04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:36:14.655512 containerd[1484]: 2025-01-13 20:36:14.628 [INFO][4985] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" Namespace="calico-system" Pod="csi-node-driver-kbrd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbrd5-eth0" Jan 13 20:36:14.655512 containerd[1484]: 2025-01-13 20:36:14.628 [INFO][4985] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali502d3059b04 ContainerID="825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" Namespace="calico-system" Pod="csi-node-driver-kbrd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbrd5-eth0" Jan 13 20:36:14.655512 containerd[1484]: 2025-01-13 20:36:14.634 [INFO][4985] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" Namespace="calico-system" Pod="csi-node-driver-kbrd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbrd5-eth0" Jan 13 20:36:14.655512 containerd[1484]: 2025-01-13 20:36:14.635 [INFO][4985] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" Namespace="calico-system" Pod="csi-node-driver-kbrd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbrd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kbrd5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 35, 51, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae", Pod:"csi-node-driver-kbrd5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali502d3059b04", MAC:"8a:79:d6:80:65:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:36:14.655512 containerd[1484]: 2025-01-13 20:36:14.650 [INFO][4985] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae" Namespace="calico-system" Pod="csi-node-driver-kbrd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbrd5-eth0" Jan 13 20:36:14.658081 containerd[1484]: time="2025-01-13T20:36:14.657642472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:36:14.662234 containerd[1484]: time="2025-01-13T20:36:14.660648562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:36:14.672076 containerd[1484]: time="2025-01-13T20:36:14.660691813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:14.672076 containerd[1484]: time="2025-01-13T20:36:14.671970225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:14.677463 systemd[1]: Started cri-containerd-3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50.scope - libcontainer container 3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50. Jan 13 20:36:14.692766 systemd-networkd[1393]: califf20be245cd: Link UP Jan 13 20:36:14.696671 systemd-networkd[1393]: califf20be245cd: Gained carrier Jan 13 20:36:14.697633 systemd[1]: Started cri-containerd-50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973.scope - libcontainer container 50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973. Jan 13 20:36:14.703826 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:36:14.715229 containerd[1484]: time="2025-01-13T20:36:14.715017027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:36:14.715229 containerd[1484]: time="2025-01-13T20:36:14.715118929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:36:14.715229 containerd[1484]: time="2025-01-13T20:36:14.715134658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:14.715436 containerd[1484]: time="2025-01-13T20:36:14.715286814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.357 [INFO][4983] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.396 [INFO][4983] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0 calico-kube-controllers-bc744d498- calico-system ac41f38c-55b9-4a77-8a90-e5737b17fd15 830 0 2025-01-13 20:35:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bc744d498 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-bc744d498-xqhts eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califf20be245cd [] []}} ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Namespace="calico-system" Pod="calico-kube-controllers-bc744d498-xqhts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc744d498--xqhts-" Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.396 [INFO][4983] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Namespace="calico-system" Pod="calico-kube-controllers-bc744d498-xqhts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0" Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.499 [INFO][5065] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" HandleID="k8s-pod-network.8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Workload="localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0" 
Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.515 [INFO][5065] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" HandleID="k8s-pod-network.8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Workload="localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003618d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-bc744d498-xqhts", "timestamp":"2025-01-13 20:36:14.499837641 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.516 [INFO][5065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.621 [INFO][5065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.621 [INFO][5065] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.624 [INFO][5065] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" host="localhost" Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.629 [INFO][5065] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.636 [INFO][5065] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.638 [INFO][5065] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.644 [INFO][5065] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.645 [INFO][5065] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" host="localhost" Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.649 [INFO][5065] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229 Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.655 [INFO][5065] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" host="localhost" Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.668 [INFO][5065] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" host="localhost" Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.668 [INFO][5065] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" host="localhost" Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.668 [INFO][5065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:36:14.719169 containerd[1484]: 2025-01-13 20:36:14.668 [INFO][5065] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" HandleID="k8s-pod-network.8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Workload="localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0" Jan 13 20:36:14.719723 containerd[1484]: 2025-01-13 20:36:14.675 [INFO][4983] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Namespace="calico-system" Pod="calico-kube-controllers-bc744d498-xqhts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0", GenerateName:"calico-kube-controllers-bc744d498-", Namespace:"calico-system", SelfLink:"", UID:"ac41f38c-55b9-4a77-8a90-e5737b17fd15", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 35, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bc744d498", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-bc744d498-xqhts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf20be245cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:36:14.719723 containerd[1484]: 2025-01-13 20:36:14.675 [INFO][4983] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Namespace="calico-system" Pod="calico-kube-controllers-bc744d498-xqhts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0" Jan 13 20:36:14.719723 containerd[1484]: 2025-01-13 20:36:14.675 [INFO][4983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf20be245cd ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Namespace="calico-system" Pod="calico-kube-controllers-bc744d498-xqhts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0" Jan 13 20:36:14.719723 containerd[1484]: 2025-01-13 20:36:14.695 [INFO][4983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Namespace="calico-system" Pod="calico-kube-controllers-bc744d498-xqhts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0" Jan 13 20:36:14.719723 containerd[1484]: 2025-01-13 20:36:14.699 [INFO][4983] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Namespace="calico-system" Pod="calico-kube-controllers-bc744d498-xqhts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0", GenerateName:"calico-kube-controllers-bc744d498-", Namespace:"calico-system", SelfLink:"", UID:"ac41f38c-55b9-4a77-8a90-e5737b17fd15", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 35, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bc744d498", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229", Pod:"calico-kube-controllers-bc744d498-xqhts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf20be245cd", MAC:"76:34:76:82:32:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:36:14.719723 containerd[1484]: 2025-01-13 20:36:14.714 [INFO][4983] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Namespace="calico-system" Pod="calico-kube-controllers-bc744d498-xqhts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0" Jan 13 20:36:14.737548 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:36:14.752430 systemd[1]: Started cri-containerd-825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae.scope - libcontainer container 825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae. Jan 13 20:36:14.764460 containerd[1484]: time="2025-01-13T20:36:14.762833188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:36:14.764460 containerd[1484]: time="2025-01-13T20:36:14.763190609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:36:14.764460 containerd[1484]: time="2025-01-13T20:36:14.763432673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:14.764659 containerd[1484]: time="2025-01-13T20:36:14.764412291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:14.771898 systemd-networkd[1393]: calic46d546f538: Link UP Jan 13 20:36:14.772183 systemd-networkd[1393]: calic46d546f538: Gained carrier Jan 13 20:36:14.786536 containerd[1484]: time="2025-01-13T20:36:14.786484648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bgl8r,Uid:b6de0b00-2ea2-4589-a7b0-2c07a644bac8,Namespace:kube-system,Attempt:6,} returns sandbox id \"3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50\"" Jan 13 20:36:14.787554 kubelet[2667]: E0113 20:36:14.787528 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:14.788733 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:36:14.791477 containerd[1484]: time="2025-01-13T20:36:14.791447689Z" level=info msg="CreateContainer within sandbox \"3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.387 [INFO][5010] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.408 [INFO][5010] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--j46wk-eth0 coredns-7db6d8ff4d- kube-system e2b6f819-04b2-4600-8686-8ab182ac15ce 826 0 2025-01-13 20:35:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-j46wk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic46d546f538 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 
9153 0 }] []}} ContainerID="4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46wk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j46wk-" Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.408 [INFO][5010] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46wk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j46wk-eth0" Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.499 [INFO][5074] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" HandleID="k8s-pod-network.4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" Workload="localhost-k8s-coredns--7db6d8ff4d--j46wk-eth0" Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.519 [INFO][5074] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" HandleID="k8s-pod-network.4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" Workload="localhost-k8s-coredns--7db6d8ff4d--j46wk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050fb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-j46wk", "timestamp":"2025-01-13 20:36:14.499324358 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.519 [INFO][5074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.669 [INFO][5074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.669 [INFO][5074] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.672 [INFO][5074] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" host="localhost" Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.677 [INFO][5074] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.682 [INFO][5074] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.688 [INFO][5074] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.700 [INFO][5074] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.700 [INFO][5074] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" host="localhost" Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.710 [INFO][5074] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.726 [INFO][5074] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" host="localhost" Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.736 [INFO][5074] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" host="localhost" Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.740 [INFO][5074] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" host="localhost" Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.741 [INFO][5074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:36:14.805379 containerd[1484]: 2025-01-13 20:36:14.741 [INFO][5074] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" HandleID="k8s-pod-network.4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" Workload="localhost-k8s-coredns--7db6d8ff4d--j46wk-eth0" Jan 13 20:36:14.806231 containerd[1484]: 2025-01-13 20:36:14.751 [INFO][5010] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46wk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j46wk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j46wk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e2b6f819-04b2-4600-8686-8ab182ac15ce", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 35, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-j46wk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic46d546f538", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:36:14.806231 containerd[1484]: 2025-01-13 20:36:14.751 [INFO][5010] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46wk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j46wk-eth0" Jan 13 20:36:14.806231 containerd[1484]: 2025-01-13 20:36:14.751 [INFO][5010] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic46d546f538 ContainerID="4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46wk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j46wk-eth0" Jan 13 20:36:14.806231 containerd[1484]: 2025-01-13 20:36:14.778 [INFO][5010] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46wk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j46wk-eth0" Jan 13 20:36:14.806231 containerd[1484]: 2025-01-13 20:36:14.778 [INFO][5010] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46wk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j46wk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j46wk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e2b6f819-04b2-4600-8686-8ab182ac15ce", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 35, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc", Pod:"coredns-7db6d8ff4d-j46wk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic46d546f538", MAC:"32:94:bd:4b:7b:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:36:14.806231 containerd[1484]: 2025-01-13 20:36:14.796 [INFO][5010] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46wk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j46wk-eth0" Jan 13 20:36:14.815833 systemd[1]: Started cri-containerd-8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229.scope - libcontainer container 8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229. Jan 13 20:36:14.820059 containerd[1484]: time="2025-01-13T20:36:14.819961172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-fz2mt,Uid:bdcc8529-6e18-4c7a-bfcb-a3677ead2383,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973\"" Jan 13 20:36:14.824468 containerd[1484]: time="2025-01-13T20:36:14.824124914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:36:14.826836 containerd[1484]: time="2025-01-13T20:36:14.826763986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbrd5,Uid:6d5d7d4c-88ab-4857-ac1f-0b6b5fd9a24d,Namespace:calico-system,Attempt:6,} returns sandbox id \"825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae\"" Jan 13 20:36:14.826995 systemd-networkd[1393]: cali21b5058a4ec: Link UP Jan 13 20:36:14.831048 systemd-networkd[1393]: cali21b5058a4ec: Gained carrier Jan 13 20:36:14.841663 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address 
Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.399 [INFO][5000] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.444 [INFO][5000] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d65d67f4f--qmkl8-eth0 calico-apiserver-5d65d67f4f- calico-apiserver 93092177-1d32-4f13-a83c-5cb4c8aca67a 832 0 2025-01-13 20:35:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d65d67f4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d65d67f4f-qmkl8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali21b5058a4ec [] []}} ContainerID="28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-qmkl8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--qmkl8-" Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.444 [INFO][5000] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-qmkl8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--qmkl8-eth0" Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.514 [INFO][5081] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" HandleID="k8s-pod-network.28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" Workload="localhost-k8s-calico--apiserver--5d65d67f4f--qmkl8-eth0" Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.521 [INFO][5081] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" HandleID="k8s-pod-network.28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" Workload="localhost-k8s-calico--apiserver--5d65d67f4f--qmkl8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5340), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d65d67f4f-qmkl8", "timestamp":"2025-01-13 20:36:14.514110242 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.521 [INFO][5081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.740 [INFO][5081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.740 [INFO][5081] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.746 [INFO][5081] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" host="localhost" Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.751 [INFO][5081] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.756 [INFO][5081] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.760 [INFO][5081] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.763 [INFO][5081] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.763 [INFO][5081] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" host="localhost" Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.765 [INFO][5081] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82 Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.773 [INFO][5081] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" host="localhost" Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.782 [INFO][5081] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" host="localhost" Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.782 [INFO][5081] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" host="localhost" Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.782 [INFO][5081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:36:14.854074 containerd[1484]: 2025-01-13 20:36:14.782 [INFO][5081] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" HandleID="k8s-pod-network.28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" Workload="localhost-k8s-calico--apiserver--5d65d67f4f--qmkl8-eth0" Jan 13 20:36:14.854763 containerd[1484]: 2025-01-13 20:36:14.805 [INFO][5000] cni-plugin/k8s.go 386: Populated endpoint ContainerID="28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-qmkl8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--qmkl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d65d67f4f--qmkl8-eth0", GenerateName:"calico-apiserver-5d65d67f4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"93092177-1d32-4f13-a83c-5cb4c8aca67a", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 35, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d65d67f4f", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d65d67f4f-qmkl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21b5058a4ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:36:14.854763 containerd[1484]: 2025-01-13 20:36:14.808 [INFO][5000] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-qmkl8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--qmkl8-eth0" Jan 13 20:36:14.854763 containerd[1484]: 2025-01-13 20:36:14.808 [INFO][5000] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21b5058a4ec ContainerID="28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-qmkl8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--qmkl8-eth0" Jan 13 20:36:14.854763 containerd[1484]: 2025-01-13 20:36:14.832 [INFO][5000] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-qmkl8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--qmkl8-eth0" Jan 13 20:36:14.854763 containerd[1484]: 2025-01-13 20:36:14.835 [INFO][5000] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-qmkl8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--qmkl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d65d67f4f--qmkl8-eth0", GenerateName:"calico-apiserver-5d65d67f4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"93092177-1d32-4f13-a83c-5cb4c8aca67a", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 35, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d65d67f4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82", Pod:"calico-apiserver-5d65d67f4f-qmkl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21b5058a4ec", MAC:"ce:de:62:11:be:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:36:14.854763 containerd[1484]: 2025-01-13 20:36:14.846 [INFO][5000] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82" Namespace="calico-apiserver" Pod="calico-apiserver-5d65d67f4f-qmkl8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d65d67f4f--qmkl8-eth0" Jan 13 20:36:14.854763 containerd[1484]: time="2025-01-13T20:36:14.853863084Z" level=info msg="CreateContainer within sandbox \"3a4fca53b2d28cf8f826717cbaf21d28dd0d2e264ca49a74bd13fb72c4c73e50\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6f751a38fa388b86cfc8ac48f748a3f2c8d1abe96d9949f9ccbd780dffcadb2\"" Jan 13 20:36:14.856944 containerd[1484]: time="2025-01-13T20:36:14.856023157Z" level=info msg="StartContainer for \"f6f751a38fa388b86cfc8ac48f748a3f2c8d1abe96d9949f9ccbd780dffcadb2\"" Jan 13 20:36:14.870039 containerd[1484]: time="2025-01-13T20:36:14.869917889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:36:14.870224 containerd[1484]: time="2025-01-13T20:36:14.870061448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:36:14.870224 containerd[1484]: time="2025-01-13T20:36:14.870134385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:14.871587 containerd[1484]: time="2025-01-13T20:36:14.871463439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:14.892756 systemd[1]: Started cri-containerd-4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc.scope - libcontainer container 4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc. 
Jan 13 20:36:14.895787 containerd[1484]: time="2025-01-13T20:36:14.895579880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc744d498-xqhts,Uid:ac41f38c-55b9-4a77-8a90-e5737b17fd15,Namespace:calico-system,Attempt:6,} returns sandbox id \"8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229\"" Jan 13 20:36:14.897153 systemd[1]: Started cri-containerd-f6f751a38fa388b86cfc8ac48f748a3f2c8d1abe96d9949f9ccbd780dffcadb2.scope - libcontainer container f6f751a38fa388b86cfc8ac48f748a3f2c8d1abe96d9949f9ccbd780dffcadb2. Jan 13 20:36:14.900629 containerd[1484]: time="2025-01-13T20:36:14.900222251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:36:14.901018 containerd[1484]: time="2025-01-13T20:36:14.900690088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:36:14.902302 containerd[1484]: time="2025-01-13T20:36:14.902151481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:14.903521 containerd[1484]: time="2025-01-13T20:36:14.903183808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:14.912062 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:36:14.928702 systemd[1]: Started cri-containerd-28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82.scope - libcontainer container 28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82. 
Jan 13 20:36:14.942640 containerd[1484]: time="2025-01-13T20:36:14.942520388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46wk,Uid:e2b6f819-04b2-4600-8686-8ab182ac15ce,Namespace:kube-system,Attempt:6,} returns sandbox id \"4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc\"" Jan 13 20:36:14.944121 kubelet[2667]: E0113 20:36:14.944086 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:14.946748 containerd[1484]: time="2025-01-13T20:36:14.946719237Z" level=info msg="CreateContainer within sandbox \"4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:36:14.953845 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:36:14.961361 containerd[1484]: time="2025-01-13T20:36:14.961322588Z" level=info msg="StartContainer for \"f6f751a38fa388b86cfc8ac48f748a3f2c8d1abe96d9949f9ccbd780dffcadb2\" returns successfully" Jan 13 20:36:14.980435 containerd[1484]: time="2025-01-13T20:36:14.980358777Z" level=info msg="CreateContainer within sandbox \"4037ccb06c05c0db30e46768a713c85c57e067b701b913a86950c579e30e7abc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a19f36e2f61133dc931cfa57e92678cb09f38c345f2c2ad72861b3a63f260d81\"" Jan 13 20:36:14.981120 containerd[1484]: time="2025-01-13T20:36:14.981039815Z" level=info msg="StartContainer for \"a19f36e2f61133dc931cfa57e92678cb09f38c345f2c2ad72861b3a63f260d81\"" Jan 13 20:36:14.989922 containerd[1484]: time="2025-01-13T20:36:14.989879091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65d67f4f-qmkl8,Uid:93092177-1d32-4f13-a83c-5cb4c8aca67a,Namespace:calico-apiserver,Attempt:6,} returns sandbox id 
\"28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82\"" Jan 13 20:36:15.012384 systemd[1]: Started cri-containerd-a19f36e2f61133dc931cfa57e92678cb09f38c345f2c2ad72861b3a63f260d81.scope - libcontainer container a19f36e2f61133dc931cfa57e92678cb09f38c345f2c2ad72861b3a63f260d81. Jan 13 20:36:15.052740 containerd[1484]: time="2025-01-13T20:36:15.052691125Z" level=info msg="StartContainer for \"a19f36e2f61133dc931cfa57e92678cb09f38c345f2c2ad72861b3a63f260d81\" returns successfully" Jan 13 20:36:15.687308 kernel: bpftool[5617]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:36:15.690732 kubelet[2667]: E0113 20:36:15.689959 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:15.696409 kubelet[2667]: E0113 20:36:15.696238 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:15.923493 systemd-networkd[1393]: vxlan.calico: Link UP Jan 13 20:36:15.923502 systemd-networkd[1393]: vxlan.calico: Gained carrier Jan 13 20:36:16.053645 systemd-networkd[1393]: calic56c8c277e7: Gained IPv6LL Jan 13 20:36:16.114385 systemd-networkd[1393]: caliaeee0b77f46: Gained IPv6LL Jan 13 20:36:16.216332 kubelet[2667]: I0113 20:36:16.216262 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bgl8r" podStartSLOduration=31.216224137 podStartE2EDuration="31.216224137s" podCreationTimestamp="2025-01-13 20:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:36:15.856009308 +0000 UTC m=+44.739832108" watchObservedRunningTime="2025-01-13 20:36:16.216224137 +0000 UTC m=+45.100046937" Jan 13 20:36:16.218324 kubelet[2667]: I0113 
20:36:16.216646 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-j46wk" podStartSLOduration=31.216641179 podStartE2EDuration="31.216641179s" podCreationTimestamp="2025-01-13 20:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:36:16.215846027 +0000 UTC m=+45.099668837" watchObservedRunningTime="2025-01-13 20:36:16.216641179 +0000 UTC m=+45.100463979" Jan 13 20:36:16.307424 systemd-networkd[1393]: cali502d3059b04: Gained IPv6LL Jan 13 20:36:16.690436 systemd-networkd[1393]: califf20be245cd: Gained IPv6LL Jan 13 20:36:16.698920 kubelet[2667]: E0113 20:36:16.698879 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:16.699363 kubelet[2667]: E0113 20:36:16.699144 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:16.754457 systemd-networkd[1393]: cali21b5058a4ec: Gained IPv6LL Jan 13 20:36:16.818419 systemd-networkd[1393]: calic46d546f538: Gained IPv6LL Jan 13 20:36:17.010441 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL Jan 13 20:36:17.701292 kubelet[2667]: E0113 20:36:17.701172 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:18.650307 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:57160.service - OpenSSH per-connection server daemon (10.0.0.1:57160). 
Jan 13 20:36:18.726316 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 57160 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:18.728664 sshd-session[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:18.733504 systemd-logind[1468]: New session 13 of user core. Jan 13 20:36:18.741391 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:36:18.869240 sshd[5731]: Connection closed by 10.0.0.1 port 57160 Jan 13 20:36:18.869687 sshd-session[5729]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:18.874091 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:57160.service: Deactivated successfully. Jan 13 20:36:18.876331 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:36:18.877016 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:36:18.877875 systemd-logind[1468]: Removed session 13. Jan 13 20:36:20.096522 containerd[1484]: time="2025-01-13T20:36:20.096458130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:20.097444 containerd[1484]: time="2025-01-13T20:36:20.097411739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 20:36:20.099142 containerd[1484]: time="2025-01-13T20:36:20.099100146Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:20.101400 containerd[1484]: time="2025-01-13T20:36:20.101369523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:20.101953 containerd[1484]: time="2025-01-13T20:36:20.101931658Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 5.277748745s" Jan 13 20:36:20.102030 containerd[1484]: time="2025-01-13T20:36:20.101957947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 20:36:20.102992 containerd[1484]: time="2025-01-13T20:36:20.102877763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:36:20.104147 containerd[1484]: time="2025-01-13T20:36:20.104103953Z" level=info msg="CreateContainer within sandbox \"50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:36:20.125107 containerd[1484]: time="2025-01-13T20:36:20.125061748Z" level=info msg="CreateContainer within sandbox \"50b581bda72c90d591cc711884e308cb45e924f922312da5b457f4bae9955973\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f1535fa89e4e9407aefbd053d6e252e0782c4d3ab3fb4e446bd71d97e61e09ec\"" Jan 13 20:36:20.125644 containerd[1484]: time="2025-01-13T20:36:20.125617912Z" level=info msg="StartContainer for \"f1535fa89e4e9407aefbd053d6e252e0782c4d3ab3fb4e446bd71d97e61e09ec\"" Jan 13 20:36:20.153462 systemd[1]: run-containerd-runc-k8s.io-f1535fa89e4e9407aefbd053d6e252e0782c4d3ab3fb4e446bd71d97e61e09ec-runc.3ECKFD.mount: Deactivated successfully. Jan 13 20:36:20.171448 systemd[1]: Started cri-containerd-f1535fa89e4e9407aefbd053d6e252e0782c4d3ab3fb4e446bd71d97e61e09ec.scope - libcontainer container f1535fa89e4e9407aefbd053d6e252e0782c4d3ab3fb4e446bd71d97e61e09ec. 
Jan 13 20:36:20.219668 containerd[1484]: time="2025-01-13T20:36:20.219558167Z" level=info msg="StartContainer for \"f1535fa89e4e9407aefbd053d6e252e0782c4d3ab3fb4e446bd71d97e61e09ec\" returns successfully" Jan 13 20:36:20.730850 kubelet[2667]: I0113 20:36:20.730753 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d65d67f4f-fz2mt" podStartSLOduration=24.450092935 podStartE2EDuration="29.730726102s" podCreationTimestamp="2025-01-13 20:35:51 +0000 UTC" firstStartedPulling="2025-01-13 20:36:14.822116556 +0000 UTC m=+43.705939346" lastFinishedPulling="2025-01-13 20:36:20.102749723 +0000 UTC m=+48.986572513" observedRunningTime="2025-01-13 20:36:20.729868674 +0000 UTC m=+49.613691474" watchObservedRunningTime="2025-01-13 20:36:20.730726102 +0000 UTC m=+49.614548902" Jan 13 20:36:21.382679 containerd[1484]: time="2025-01-13T20:36:21.382618475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:21.383535 containerd[1484]: time="2025-01-13T20:36:21.383495861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 20:36:21.385218 containerd[1484]: time="2025-01-13T20:36:21.385185941Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:21.387709 containerd[1484]: time="2025-01-13T20:36:21.387650043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:21.395222 containerd[1484]: time="2025-01-13T20:36:21.395166844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.292260478s" Jan 13 20:36:21.395222 containerd[1484]: time="2025-01-13T20:36:21.395213702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 20:36:21.396255 containerd[1484]: time="2025-01-13T20:36:21.396192158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 20:36:21.398149 containerd[1484]: time="2025-01-13T20:36:21.397845859Z" level=info msg="CreateContainer within sandbox \"825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:36:21.426144 containerd[1484]: time="2025-01-13T20:36:21.426092377Z" level=info msg="CreateContainer within sandbox \"825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9e31e63f4961e3c08f706908101f40831617142c09b31eaec8cbd9629d13dfb6\"" Jan 13 20:36:21.428409 containerd[1484]: time="2025-01-13T20:36:21.426725144Z" level=info msg="StartContainer for \"9e31e63f4961e3c08f706908101f40831617142c09b31eaec8cbd9629d13dfb6\"" Jan 13 20:36:21.466391 systemd[1]: Started cri-containerd-9e31e63f4961e3c08f706908101f40831617142c09b31eaec8cbd9629d13dfb6.scope - libcontainer container 9e31e63f4961e3c08f706908101f40831617142c09b31eaec8cbd9629d13dfb6. 
Jan 13 20:36:21.504588 containerd[1484]: time="2025-01-13T20:36:21.504527271Z" level=info msg="StartContainer for \"9e31e63f4961e3c08f706908101f40831617142c09b31eaec8cbd9629d13dfb6\" returns successfully" Jan 13 20:36:21.718886 kubelet[2667]: I0113 20:36:21.718761 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:36:22.295610 kubelet[2667]: I0113 20:36:22.295549 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:36:22.296470 kubelet[2667]: E0113 20:36:22.296433 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:22.897967 kubelet[2667]: I0113 20:36:22.897891 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:36:23.314482 containerd[1484]: time="2025-01-13T20:36:23.314415091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:23.315316 containerd[1484]: time="2025-01-13T20:36:23.315223608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 20:36:23.316432 containerd[1484]: time="2025-01-13T20:36:23.316365810Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:23.318497 containerd[1484]: time="2025-01-13T20:36:23.318459207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:23.318989 containerd[1484]: time="2025-01-13T20:36:23.318949316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" 
with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.922712364s" Jan 13 20:36:23.318989 containerd[1484]: time="2025-01-13T20:36:23.318982578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 20:36:23.319971 containerd[1484]: time="2025-01-13T20:36:23.319944232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:36:23.329032 containerd[1484]: time="2025-01-13T20:36:23.328986153Z" level=info msg="CreateContainer within sandbox \"8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 20:36:23.345670 containerd[1484]: time="2025-01-13T20:36:23.345611138Z" level=info msg="CreateContainer within sandbox \"8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b4c38840fe73bc22489d7aff9f65c6afaba16113fe9c3678582117434f3c6463\"" Jan 13 20:36:23.346459 containerd[1484]: time="2025-01-13T20:36:23.346385400Z" level=info msg="StartContainer for \"b4c38840fe73bc22489d7aff9f65c6afaba16113fe9c3678582117434f3c6463\"" Jan 13 20:36:23.382564 systemd[1]: Started cri-containerd-b4c38840fe73bc22489d7aff9f65c6afaba16113fe9c3678582117434f3c6463.scope - libcontainer container b4c38840fe73bc22489d7aff9f65c6afaba16113fe9c3678582117434f3c6463. 
Jan 13 20:36:23.441385 containerd[1484]: time="2025-01-13T20:36:23.440799379Z" level=info msg="StartContainer for \"b4c38840fe73bc22489d7aff9f65c6afaba16113fe9c3678582117434f3c6463\" returns successfully" Jan 13 20:36:23.771517 kubelet[2667]: I0113 20:36:23.771434 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-bc744d498-xqhts" podStartSLOduration=24.348723994 podStartE2EDuration="32.771404607s" podCreationTimestamp="2025-01-13 20:35:51 +0000 UTC" firstStartedPulling="2025-01-13 20:36:14.897111144 +0000 UTC m=+43.780933934" lastFinishedPulling="2025-01-13 20:36:23.319791757 +0000 UTC m=+52.203614547" observedRunningTime="2025-01-13 20:36:23.771227084 +0000 UTC m=+52.655049884" watchObservedRunningTime="2025-01-13 20:36:23.771404607 +0000 UTC m=+52.655227397" Jan 13 20:36:23.885359 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:58554.service - OpenSSH per-connection server daemon (10.0.0.1:58554). Jan 13 20:36:23.939612 sshd[5934]: Accepted publickey for core from 10.0.0.1 port 58554 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:23.941701 sshd-session[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:23.946301 systemd-logind[1468]: New session 14 of user core. Jan 13 20:36:23.952370 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:36:24.085725 containerd[1484]: time="2025-01-13T20:36:24.085575323Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:24.087829 sshd[5936]: Connection closed by 10.0.0.1 port 58554 Jan 13 20:36:24.088192 sshd-session[5934]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:24.092785 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:58554.service: Deactivated successfully. Jan 13 20:36:24.095204 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 13 20:36:24.095879 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:36:24.096779 systemd-logind[1468]: Removed session 14. Jan 13 20:36:24.135649 containerd[1484]: time="2025-01-13T20:36:24.135561483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 20:36:24.137700 containerd[1484]: time="2025-01-13T20:36:24.137647125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 817.672004ms" Jan 13 20:36:24.137780 containerd[1484]: time="2025-01-13T20:36:24.137700786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 20:36:24.138897 containerd[1484]: time="2025-01-13T20:36:24.138866332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:36:24.141157 containerd[1484]: time="2025-01-13T20:36:24.141098509Z" level=info msg="CreateContainer within sandbox \"28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:36:25.367646 containerd[1484]: time="2025-01-13T20:36:25.367575011Z" level=info msg="CreateContainer within sandbox \"28241f3705d30721dd3a45ed6841e8d2fdc423d94602c69c6491ba14e08e1c82\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"aed65d6d69ea7c3552fd7b60d244408e3ddeca827ca26acb72a4e877870cb716\"" Jan 13 20:36:25.368339 containerd[1484]: time="2025-01-13T20:36:25.368136647Z" level=info msg="StartContainer for 
\"aed65d6d69ea7c3552fd7b60d244408e3ddeca827ca26acb72a4e877870cb716\"" Jan 13 20:36:25.406424 systemd[1]: Started cri-containerd-aed65d6d69ea7c3552fd7b60d244408e3ddeca827ca26acb72a4e877870cb716.scope - libcontainer container aed65d6d69ea7c3552fd7b60d244408e3ddeca827ca26acb72a4e877870cb716. Jan 13 20:36:25.465054 containerd[1484]: time="2025-01-13T20:36:25.464594526Z" level=info msg="StartContainer for \"aed65d6d69ea7c3552fd7b60d244408e3ddeca827ca26acb72a4e877870cb716\" returns successfully" Jan 13 20:36:25.784356 kubelet[2667]: I0113 20:36:25.784277 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d65d67f4f-qmkl8" podStartSLOduration=25.63808204 podStartE2EDuration="34.784233581s" podCreationTimestamp="2025-01-13 20:35:51 +0000 UTC" firstStartedPulling="2025-01-13 20:36:14.992494318 +0000 UTC m=+43.876317108" lastFinishedPulling="2025-01-13 20:36:24.138645859 +0000 UTC m=+53.022468649" observedRunningTime="2025-01-13 20:36:25.783400373 +0000 UTC m=+54.667223173" watchObservedRunningTime="2025-01-13 20:36:25.784233581 +0000 UTC m=+54.668056371" Jan 13 20:36:26.735652 kubelet[2667]: I0113 20:36:26.735610 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:36:27.089100 containerd[1484]: time="2025-01-13T20:36:27.089030438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:27.090334 containerd[1484]: time="2025-01-13T20:36:27.090268004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 20:36:27.092273 containerd[1484]: time="2025-01-13T20:36:27.092228594Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:27.094913 containerd[1484]: 
time="2025-01-13T20:36:27.094869435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:27.095882 containerd[1484]: time="2025-01-13T20:36:27.095831570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.956830294s" Jan 13 20:36:27.095882 containerd[1484]: time="2025-01-13T20:36:27.095879082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 20:36:27.098704 containerd[1484]: time="2025-01-13T20:36:27.098661696Z" level=info msg="CreateContainer within sandbox \"825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:36:27.116524 containerd[1484]: time="2025-01-13T20:36:27.116474049Z" level=info msg="CreateContainer within sandbox \"825dd2861418bcc828a6e73b113a3ab03ea4c8f27e402e121d71b3887cd175ae\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1a86997f313ef587dc6affb5d4a94b15944752cdf3c51c5269f030cc61d48495\"" Jan 13 20:36:27.117126 containerd[1484]: time="2025-01-13T20:36:27.117069878Z" level=info msg="StartContainer for \"1a86997f313ef587dc6affb5d4a94b15944752cdf3c51c5269f030cc61d48495\"" Jan 13 20:36:27.155680 systemd[1]: Started cri-containerd-1a86997f313ef587dc6affb5d4a94b15944752cdf3c51c5269f030cc61d48495.scope - libcontainer container 
1a86997f313ef587dc6affb5d4a94b15944752cdf3c51c5269f030cc61d48495. Jan 13 20:36:27.194839 containerd[1484]: time="2025-01-13T20:36:27.194774412Z" level=info msg="StartContainer for \"1a86997f313ef587dc6affb5d4a94b15944752cdf3c51c5269f030cc61d48495\" returns successfully" Jan 13 20:36:27.352595 kubelet[2667]: I0113 20:36:27.352354 2667 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:36:27.352595 kubelet[2667]: I0113 20:36:27.352397 2667 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:36:27.752752 kubelet[2667]: I0113 20:36:27.752308 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kbrd5" podStartSLOduration=24.485739526 podStartE2EDuration="36.752285868s" podCreationTimestamp="2025-01-13 20:35:51 +0000 UTC" firstStartedPulling="2025-01-13 20:36:14.830221624 +0000 UTC m=+43.714044414" lastFinishedPulling="2025-01-13 20:36:27.096767966 +0000 UTC m=+55.980590756" observedRunningTime="2025-01-13 20:36:27.751917047 +0000 UTC m=+56.635739857" watchObservedRunningTime="2025-01-13 20:36:27.752285868 +0000 UTC m=+56.636108658" Jan 13 20:36:29.100815 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:58564.service - OpenSSH per-connection server daemon (10.0.0.1:58564). Jan 13 20:36:29.155539 sshd[6059]: Accepted publickey for core from 10.0.0.1 port 58564 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:29.157722 sshd-session[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:29.162411 systemd-logind[1468]: New session 15 of user core. Jan 13 20:36:29.169453 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 13 20:36:29.304278 sshd[6061]: Connection closed by 10.0.0.1 port 58564 Jan 13 20:36:29.307503 sshd-session[6059]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:29.319398 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:58564.service: Deactivated successfully. Jan 13 20:36:29.324126 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:36:29.329541 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:36:29.339553 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:58566.service - OpenSSH per-connection server daemon (10.0.0.1:58566). Jan 13 20:36:29.340323 systemd-logind[1468]: Removed session 15. Jan 13 20:36:29.371646 sshd[6074]: Accepted publickey for core from 10.0.0.1 port 58566 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:29.373362 sshd-session[6074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:29.377724 systemd-logind[1468]: New session 16 of user core. Jan 13 20:36:29.393403 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:36:29.664530 sshd[6076]: Connection closed by 10.0.0.1 port 58566 Jan 13 20:36:29.665112 sshd-session[6074]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:29.673522 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:58566.service: Deactivated successfully. Jan 13 20:36:29.675571 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:36:29.677508 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:36:29.685702 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:58574.service - OpenSSH per-connection server daemon (10.0.0.1:58574). Jan 13 20:36:29.686904 systemd-logind[1468]: Removed session 16. 
Jan 13 20:36:29.721381 sshd[6086]: Accepted publickey for core from 10.0.0.1 port 58574 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:29.722999 sshd-session[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:29.727149 systemd-logind[1468]: New session 17 of user core. Jan 13 20:36:29.737384 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:36:31.264930 containerd[1484]: time="2025-01-13T20:36:31.264401758Z" level=info msg="StopPodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\"" Jan 13 20:36:31.264930 containerd[1484]: time="2025-01-13T20:36:31.264564150Z" level=info msg="TearDown network for sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" successfully" Jan 13 20:36:31.264930 containerd[1484]: time="2025-01-13T20:36:31.264578407Z" level=info msg="StopPodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" returns successfully" Jan 13 20:36:31.264930 containerd[1484]: time="2025-01-13T20:36:31.264927298Z" level=info msg="RemovePodSandbox for \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\"" Jan 13 20:36:31.278502 containerd[1484]: time="2025-01-13T20:36:31.278412802Z" level=info msg="Forcibly stopping sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\"" Jan 13 20:36:31.278918 containerd[1484]: time="2025-01-13T20:36:31.278660038Z" level=info msg="TearDown network for sandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" successfully" Jan 13 20:36:31.352310 containerd[1484]: time="2025-01-13T20:36:31.352220201Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.352498 containerd[1484]: time="2025-01-13T20:36:31.352322978Z" level=info msg="RemovePodSandbox \"1182fd3e254f1df5a55a2dc9cfce2b7bd56b7d2ea3237be8e327323cbcbd209d\" returns successfully" Jan 13 20:36:31.352914 containerd[1484]: time="2025-01-13T20:36:31.352866763Z" level=info msg="StopPodSandbox for \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\"" Jan 13 20:36:31.353116 containerd[1484]: time="2025-01-13T20:36:31.353007374Z" level=info msg="TearDown network for sandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\" successfully" Jan 13 20:36:31.353116 containerd[1484]: time="2025-01-13T20:36:31.353066097Z" level=info msg="StopPodSandbox for \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\" returns successfully" Jan 13 20:36:31.354539 containerd[1484]: time="2025-01-13T20:36:31.353401602Z" level=info msg="RemovePodSandbox for \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\"" Jan 13 20:36:31.354539 containerd[1484]: time="2025-01-13T20:36:31.353428814Z" level=info msg="Forcibly stopping sandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\"" Jan 13 20:36:31.354539 containerd[1484]: time="2025-01-13T20:36:31.353508838Z" level=info msg="TearDown network for sandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\" successfully" Jan 13 20:36:31.361872 containerd[1484]: time="2025-01-13T20:36:31.361820827Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.362361 containerd[1484]: time="2025-01-13T20:36:31.361899908Z" level=info msg="RemovePodSandbox \"efd252c6c489da474ffc18651a2812980a36b82d6ea73662f2a26bba3d2065dd\" returns successfully" Jan 13 20:36:31.362578 containerd[1484]: time="2025-01-13T20:36:31.362556201Z" level=info msg="StopPodSandbox for \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\"" Jan 13 20:36:31.362700 containerd[1484]: time="2025-01-13T20:36:31.362677864Z" level=info msg="TearDown network for sandbox \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\" successfully" Jan 13 20:36:31.362755 containerd[1484]: time="2025-01-13T20:36:31.362697331Z" level=info msg="StopPodSandbox for \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\" returns successfully" Jan 13 20:36:31.363017 containerd[1484]: time="2025-01-13T20:36:31.362984985Z" level=info msg="RemovePodSandbox for \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\"" Jan 13 20:36:31.363085 containerd[1484]: time="2025-01-13T20:36:31.363016386Z" level=info msg="Forcibly stopping sandbox \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\"" Jan 13 20:36:31.363164 containerd[1484]: time="2025-01-13T20:36:31.363111468Z" level=info msg="TearDown network for sandbox \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\" successfully" Jan 13 20:36:31.369326 containerd[1484]: time="2025-01-13T20:36:31.369274125Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.369490 containerd[1484]: time="2025-01-13T20:36:31.369356734Z" level=info msg="RemovePodSandbox \"ae5cbbaa2fe15303d750ad256a055aed27399ef7e7dd5f6edf78425c32cb319b\" returns successfully" Jan 13 20:36:31.370375 containerd[1484]: time="2025-01-13T20:36:31.369905639Z" level=info msg="StopPodSandbox for \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\"" Jan 13 20:36:31.370560 containerd[1484]: time="2025-01-13T20:36:31.370239271Z" level=info msg="TearDown network for sandbox \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\" successfully" Jan 13 20:36:31.371134 containerd[1484]: time="2025-01-13T20:36:31.370733390Z" level=info msg="StopPodSandbox for \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\" returns successfully" Jan 13 20:36:31.371995 containerd[1484]: time="2025-01-13T20:36:31.371845388Z" level=info msg="RemovePodSandbox for \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\"" Jan 13 20:36:31.371995 containerd[1484]: time="2025-01-13T20:36:31.371873221Z" level=info msg="Forcibly stopping sandbox \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\"" Jan 13 20:36:31.372327 containerd[1484]: time="2025-01-13T20:36:31.372122230Z" level=info msg="TearDown network for sandbox \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\" successfully" Jan 13 20:36:31.377077 containerd[1484]: time="2025-01-13T20:36:31.377011750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.377077 containerd[1484]: time="2025-01-13T20:36:31.377076604Z" level=info msg="RemovePodSandbox \"7aaf76d5cd895364dc622a6cc980496e9a6127835f1d9aca6bc49ba07bc2588f\" returns successfully" Jan 13 20:36:31.377547 containerd[1484]: time="2025-01-13T20:36:31.377520137Z" level=info msg="StopPodSandbox for \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\"" Jan 13 20:36:31.377663 containerd[1484]: time="2025-01-13T20:36:31.377636812Z" level=info msg="TearDown network for sandbox \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\" successfully" Jan 13 20:36:31.377663 containerd[1484]: time="2025-01-13T20:36:31.377657511Z" level=info msg="StopPodSandbox for \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\" returns successfully" Jan 13 20:36:31.377932 containerd[1484]: time="2025-01-13T20:36:31.377899787Z" level=info msg="RemovePodSandbox for \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\"" Jan 13 20:36:31.377998 containerd[1484]: time="2025-01-13T20:36:31.377935105Z" level=info msg="Forcibly stopping sandbox \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\"" Jan 13 20:36:31.378096 containerd[1484]: time="2025-01-13T20:36:31.378045687Z" level=info msg="TearDown network for sandbox \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\" successfully" Jan 13 20:36:31.383281 containerd[1484]: time="2025-01-13T20:36:31.383219624Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.383411 containerd[1484]: time="2025-01-13T20:36:31.383304828Z" level=info msg="RemovePodSandbox \"e027066a7d6f7c3e3f7184ec4ea5f8bcc206bc55e9203dd14df169d78c28bbf4\" returns successfully" Jan 13 20:36:31.383890 containerd[1484]: time="2025-01-13T20:36:31.383859985Z" level=info msg="StopPodSandbox for \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\"" Jan 13 20:36:31.383988 containerd[1484]: time="2025-01-13T20:36:31.383968203Z" level=info msg="TearDown network for sandbox \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\" successfully" Jan 13 20:36:31.384032 containerd[1484]: time="2025-01-13T20:36:31.383986067Z" level=info msg="StopPodSandbox for \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\" returns successfully" Jan 13 20:36:31.384623 containerd[1484]: time="2025-01-13T20:36:31.384589788Z" level=info msg="RemovePodSandbox for \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\"" Jan 13 20:36:31.384623 containerd[1484]: time="2025-01-13T20:36:31.384619265Z" level=info msg="Forcibly stopping sandbox \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\"" Jan 13 20:36:31.384760 containerd[1484]: time="2025-01-13T20:36:31.384704779Z" level=info msg="TearDown network for sandbox \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\" successfully" Jan 13 20:36:31.388827 containerd[1484]: time="2025-01-13T20:36:31.388761417Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.388827 containerd[1484]: time="2025-01-13T20:36:31.388818427Z" level=info msg="RemovePodSandbox \"9cc67d304a491f00f21270239979706697fed8ccb5c07bba4753f8fa93321256\" returns successfully" Jan 13 20:36:31.389160 sshd[6088]: Connection closed by 10.0.0.1 port 58574 Jan 13 20:36:31.389755 containerd[1484]: time="2025-01-13T20:36:31.389199920Z" level=info msg="StopPodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\"" Jan 13 20:36:31.389755 containerd[1484]: time="2025-01-13T20:36:31.389330862Z" level=info msg="TearDown network for sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" successfully" Jan 13 20:36:31.389755 containerd[1484]: time="2025-01-13T20:36:31.389345691Z" level=info msg="StopPodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" returns successfully" Jan 13 20:36:31.390401 sshd-session[6086]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:31.394947 containerd[1484]: time="2025-01-13T20:36:31.394905509Z" level=info msg="RemovePodSandbox for \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\"" Jan 13 20:36:31.395060 containerd[1484]: time="2025-01-13T20:36:31.394980794Z" level=info msg="Forcibly stopping sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\"" Jan 13 20:36:31.397206 containerd[1484]: time="2025-01-13T20:36:31.395147384Z" level=info msg="TearDown network for sandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" successfully" Jan 13 20:36:31.402413 containerd[1484]: time="2025-01-13T20:36:31.402030706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.402413 containerd[1484]: time="2025-01-13T20:36:31.402112624Z" level=info msg="RemovePodSandbox \"167e10f2e6bb6d8c7ae654e35d4235ad61240e329de507957bf3710ff7a54da8\" returns successfully" Jan 13 20:36:31.404417 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:48336.service - OpenSSH per-connection server daemon (10.0.0.1:48336). Jan 13 20:36:31.405087 containerd[1484]: time="2025-01-13T20:36:31.404444286Z" level=info msg="StopPodSandbox for \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\"" Jan 13 20:36:31.405087 containerd[1484]: time="2025-01-13T20:36:31.404599405Z" level=info msg="TearDown network for sandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\" successfully" Jan 13 20:36:31.405087 containerd[1484]: time="2025-01-13T20:36:31.404614864Z" level=info msg="StopPodSandbox for \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\" returns successfully" Jan 13 20:36:31.406614 containerd[1484]: time="2025-01-13T20:36:31.405713626Z" level=info msg="RemovePodSandbox for \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\"" Jan 13 20:36:31.406614 containerd[1484]: time="2025-01-13T20:36:31.405740328Z" level=info msg="Forcibly stopping sandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\"" Jan 13 20:36:31.406614 containerd[1484]: time="2025-01-13T20:36:31.405825542Z" level=info msg="TearDown network for sandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\" successfully" Jan 13 20:36:31.407805 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:58574.service: Deactivated successfully. Jan 13 20:36:31.414335 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:36:31.423290 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:36:31.427525 systemd-logind[1468]: Removed session 17. 
Jan 13 20:36:31.432093 containerd[1484]: time="2025-01-13T20:36:31.431624918Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:36:31.432093 containerd[1484]: time="2025-01-13T20:36:31.431697028Z" level=info msg="RemovePodSandbox \"8b85d3245eed3708acaef9eff1dd002fe7d5545d2cf8f48c6939951e02c63a38\" returns successfully" Jan 13 20:36:31.432229 containerd[1484]: time="2025-01-13T20:36:31.432203261Z" level=info msg="StopPodSandbox for \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\"" Jan 13 20:36:31.432599 containerd[1484]: time="2025-01-13T20:36:31.432346836Z" level=info msg="TearDown network for sandbox \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\" successfully" Jan 13 20:36:31.432599 containerd[1484]: time="2025-01-13T20:36:31.432366724Z" level=info msg="StopPodSandbox for \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\" returns successfully" Jan 13 20:36:31.432736 containerd[1484]: time="2025-01-13T20:36:31.432712299Z" level=info msg="RemovePodSandbox for \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\"" Jan 13 20:36:31.432736 containerd[1484]: time="2025-01-13T20:36:31.432737548Z" level=info msg="Forcibly stopping sandbox \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\"" Jan 13 20:36:31.433014 containerd[1484]: time="2025-01-13T20:36:31.432808814Z" level=info msg="TearDown network for sandbox \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\" successfully" Jan 13 20:36:31.438338 containerd[1484]: time="2025-01-13T20:36:31.438239534Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\": an error occurred when try to find sandbox: not 
found. Sending the event with nil podSandboxStatus." Jan 13 20:36:31.438443 containerd[1484]: time="2025-01-13T20:36:31.438370446Z" level=info msg="RemovePodSandbox \"e62f4f283d77c490a9549eacedb27fe06002de86a2aa263d54772093e13d09d3\" returns successfully" Jan 13 20:36:31.440212 containerd[1484]: time="2025-01-13T20:36:31.439320663Z" level=info msg="StopPodSandbox for \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\"" Jan 13 20:36:31.442231 containerd[1484]: time="2025-01-13T20:36:31.440382915Z" level=info msg="TearDown network for sandbox \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\" successfully" Jan 13 20:36:31.442231 containerd[1484]: time="2025-01-13T20:36:31.440445135Z" level=info msg="StopPodSandbox for \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\" returns successfully" Jan 13 20:36:31.442631 containerd[1484]: time="2025-01-13T20:36:31.442599085Z" level=info msg="RemovePodSandbox for \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\"" Jan 13 20:36:31.442679 containerd[1484]: time="2025-01-13T20:36:31.442636708Z" level=info msg="Forcibly stopping sandbox \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\"" Jan 13 20:36:31.442778 containerd[1484]: time="2025-01-13T20:36:31.442727322Z" level=info msg="TearDown network for sandbox \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\" successfully" Jan 13 20:36:31.454123 containerd[1484]: time="2025-01-13T20:36:31.454078424Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.454281 containerd[1484]: time="2025-01-13T20:36:31.454145392Z" level=info msg="RemovePodSandbox \"c2c2ed3b6edf77956b6983f2e4994ccab3b623025d9be58c26e81bb577570ed2\" returns successfully" Jan 13 20:36:31.455303 containerd[1484]: time="2025-01-13T20:36:31.455102442Z" level=info msg="StopPodSandbox for \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\"" Jan 13 20:36:31.455303 containerd[1484]: time="2025-01-13T20:36:31.455202074Z" level=info msg="TearDown network for sandbox \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\" successfully" Jan 13 20:36:31.455303 containerd[1484]: time="2025-01-13T20:36:31.455212112Z" level=info msg="StopPodSandbox for \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\" returns successfully" Jan 13 20:36:31.456952 containerd[1484]: time="2025-01-13T20:36:31.455704740Z" level=info msg="RemovePodSandbox for \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\"" Jan 13 20:36:31.456952 containerd[1484]: time="2025-01-13T20:36:31.455733926Z" level=info msg="Forcibly stopping sandbox \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\"" Jan 13 20:36:31.456952 containerd[1484]: time="2025-01-13T20:36:31.455816054Z" level=info msg="TearDown network for sandbox \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\" successfully" Jan 13 20:36:31.461591 containerd[1484]: time="2025-01-13T20:36:31.461371143Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.461591 containerd[1484]: time="2025-01-13T20:36:31.461457629Z" level=info msg="RemovePodSandbox \"e909a8c359ee528a7d154139487d5b9d4a467bda1dda802c56eb579b6b62e877\" returns successfully" Jan 13 20:36:31.462087 containerd[1484]: time="2025-01-13T20:36:31.461844953Z" level=info msg="StopPodSandbox for \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\"" Jan 13 20:36:31.462087 containerd[1484]: time="2025-01-13T20:36:31.461952511Z" level=info msg="TearDown network for sandbox \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\" successfully" Jan 13 20:36:31.462087 containerd[1484]: time="2025-01-13T20:36:31.461968160Z" level=info msg="StopPodSandbox for \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\" returns successfully" Jan 13 20:36:31.462544 containerd[1484]: time="2025-01-13T20:36:31.462493190Z" level=info msg="RemovePodSandbox for \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\"" Jan 13 20:36:31.462621 containerd[1484]: time="2025-01-13T20:36:31.462544820Z" level=info msg="Forcibly stopping sandbox \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\"" Jan 13 20:36:31.462670 containerd[1484]: time="2025-01-13T20:36:31.462626296Z" level=info msg="TearDown network for sandbox \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\" successfully" Jan 13 20:36:31.468751 containerd[1484]: time="2025-01-13T20:36:31.468625057Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.468751 containerd[1484]: time="2025-01-13T20:36:31.468704991Z" level=info msg="RemovePodSandbox \"ac3af767623552648163589fd1eceb277e8f93352545b85bc62b2451da1d6378\" returns successfully" Jan 13 20:36:31.469220 containerd[1484]: time="2025-01-13T20:36:31.469086174Z" level=info msg="StopPodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\"" Jan 13 20:36:31.469220 containerd[1484]: time="2025-01-13T20:36:31.469168282Z" level=info msg="TearDown network for sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" successfully" Jan 13 20:36:31.469220 containerd[1484]: time="2025-01-13T20:36:31.469178232Z" level=info msg="StopPodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" returns successfully" Jan 13 20:36:31.470074 containerd[1484]: time="2025-01-13T20:36:31.469577028Z" level=info msg="RemovePodSandbox for \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\"" Jan 13 20:36:31.470074 containerd[1484]: time="2025-01-13T20:36:31.469630831Z" level=info msg="Forcibly stopping sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\"" Jan 13 20:36:31.470074 containerd[1484]: time="2025-01-13T20:36:31.469755561Z" level=info msg="TearDown network for sandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" successfully" Jan 13 20:36:31.475398 containerd[1484]: time="2025-01-13T20:36:31.475308947Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.475398 containerd[1484]: time="2025-01-13T20:36:31.475344245Z" level=info msg="RemovePodSandbox \"89257fc2b79e3133cc79f0c6c6494af635d860f2bacc4e76193829a7a27d8793\" returns successfully" Jan 13 20:36:31.477098 containerd[1484]: time="2025-01-13T20:36:31.475607751Z" level=info msg="StopPodSandbox for \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\"" Jan 13 20:36:31.477098 containerd[1484]: time="2025-01-13T20:36:31.475715038Z" level=info msg="TearDown network for sandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\" successfully" Jan 13 20:36:31.477098 containerd[1484]: time="2025-01-13T20:36:31.475729836Z" level=info msg="StopPodSandbox for \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\" returns successfully" Jan 13 20:36:31.477098 containerd[1484]: time="2025-01-13T20:36:31.476079940Z" level=info msg="RemovePodSandbox for \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\"" Jan 13 20:36:31.477098 containerd[1484]: time="2025-01-13T20:36:31.476121620Z" level=info msg="Forcibly stopping sandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\"" Jan 13 20:36:31.477098 containerd[1484]: time="2025-01-13T20:36:31.476334579Z" level=info msg="TearDown network for sandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\" successfully" Jan 13 20:36:31.477326 sshd[6106]: Accepted publickey for core from 10.0.0.1 port 48336 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:31.478970 sshd-session[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:31.481024 containerd[1484]: time="2025-01-13T20:36:31.480601742Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.481024 containerd[1484]: time="2025-01-13T20:36:31.480670034Z" level=info msg="RemovePodSandbox \"dc00605acdd69d16a2178d9eae12ea87357fc84dffd9d7538ae21dc13fe0cdef\" returns successfully" Jan 13 20:36:31.483574 containerd[1484]: time="2025-01-13T20:36:31.483527386Z" level=info msg="StopPodSandbox for \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\"" Jan 13 20:36:31.483722 containerd[1484]: time="2025-01-13T20:36:31.483666103Z" level=info msg="TearDown network for sandbox \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\" successfully" Jan 13 20:36:31.483722 containerd[1484]: time="2025-01-13T20:36:31.483682715Z" level=info msg="StopPodSandbox for \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\" returns successfully" Jan 13 20:36:31.484594 containerd[1484]: time="2025-01-13T20:36:31.484554270Z" level=info msg="RemovePodSandbox for \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\"" Jan 13 20:36:31.484594 containerd[1484]: time="2025-01-13T20:36:31.484583546Z" level=info msg="Forcibly stopping sandbox \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\"" Jan 13 20:36:31.484852 containerd[1484]: time="2025-01-13T20:36:31.484669341Z" level=info msg="TearDown network for sandbox \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\" successfully" Jan 13 20:36:31.487319 systemd-logind[1468]: New session 18 of user core. Jan 13 20:36:31.489531 containerd[1484]: time="2025-01-13T20:36:31.489471053Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.489531 containerd[1484]: time="2025-01-13T20:36:31.489533713Z" level=info msg="RemovePodSandbox \"21cca4bef34b966cd1ee1a0f4a202264ad6012a7857cd8eb0c0031697ec2ed5c\" returns successfully" Jan 13 20:36:31.489834 containerd[1484]: time="2025-01-13T20:36:31.489809423Z" level=info msg="StopPodSandbox for \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\"" Jan 13 20:36:31.489925 containerd[1484]: time="2025-01-13T20:36:31.489902772Z" level=info msg="TearDown network for sandbox \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\" successfully" Jan 13 20:36:31.489925 containerd[1484]: time="2025-01-13T20:36:31.489920657Z" level=info msg="StopPodSandbox for \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\" returns successfully" Jan 13 20:36:31.490135 containerd[1484]: time="2025-01-13T20:36:31.490109430Z" level=info msg="RemovePodSandbox for \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\"" Jan 13 20:36:31.490195 containerd[1484]: time="2025-01-13T20:36:31.490134588Z" level=info msg="Forcibly stopping sandbox \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\"" Jan 13 20:36:31.490278 containerd[1484]: time="2025-01-13T20:36:31.490217988Z" level=info msg="TearDown network for sandbox \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\" successfully" Jan 13 20:36:31.492394 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:36:31.494359 containerd[1484]: time="2025-01-13T20:36:31.494330624Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.494454 containerd[1484]: time="2025-01-13T20:36:31.494372785Z" level=info msg="RemovePodSandbox \"340072986fef6fb0aac65bea58a2b86bd63f22e713fab23a57caa90fe5651017\" returns successfully" Jan 13 20:36:31.494739 containerd[1484]: time="2025-01-13T20:36:31.494702780Z" level=info msg="StopPodSandbox for \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\"" Jan 13 20:36:31.494832 containerd[1484]: time="2025-01-13T20:36:31.494808433Z" level=info msg="TearDown network for sandbox \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\" successfully" Jan 13 20:36:31.494863 containerd[1484]: time="2025-01-13T20:36:31.494829874Z" level=info msg="StopPodSandbox for \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\" returns successfully" Jan 13 20:36:31.495081 containerd[1484]: time="2025-01-13T20:36:31.495057101Z" level=info msg="RemovePodSandbox for \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\"" Jan 13 20:36:31.495127 containerd[1484]: time="2025-01-13T20:36:31.495084323Z" level=info msg="Forcibly stopping sandbox \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\"" Jan 13 20:36:31.495220 containerd[1484]: time="2025-01-13T20:36:31.495180869Z" level=info msg="TearDown network for sandbox \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\" successfully" Jan 13 20:36:31.499594 containerd[1484]: time="2025-01-13T20:36:31.499559457Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.499594 containerd[1484]: time="2025-01-13T20:36:31.499596046Z" level=info msg="RemovePodSandbox \"96f4369467966daf244a9c81f0ba28dd7cc547bf93a42cc1e69053b51832d4eb\" returns successfully" Jan 13 20:36:31.499830 containerd[1484]: time="2025-01-13T20:36:31.499798546Z" level=info msg="StopPodSandbox for \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\"" Jan 13 20:36:31.499914 containerd[1484]: time="2025-01-13T20:36:31.499884922Z" level=info msg="TearDown network for sandbox \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\" successfully" Jan 13 20:36:31.499914 containerd[1484]: time="2025-01-13T20:36:31.499903157Z" level=info msg="StopPodSandbox for \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\" returns successfully" Jan 13 20:36:31.500107 containerd[1484]: time="2025-01-13T20:36:31.500090507Z" level=info msg="RemovePodSandbox for \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\"" Jan 13 20:36:31.500143 containerd[1484]: time="2025-01-13T20:36:31.500108031Z" level=info msg="Forcibly stopping sandbox \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\"" Jan 13 20:36:31.500184 containerd[1484]: time="2025-01-13T20:36:31.500171603Z" level=info msg="TearDown network for sandbox \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\" successfully" Jan 13 20:36:31.503635 containerd[1484]: time="2025-01-13T20:36:31.503610664Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.503707 containerd[1484]: time="2025-01-13T20:36:31.503643147Z" level=info msg="RemovePodSandbox \"b70575e09284f0e89e1c5ba4feac2b472d9469b036cab13f2383f60d575708ed\" returns successfully" Jan 13 20:36:31.504030 containerd[1484]: time="2025-01-13T20:36:31.503939006Z" level=info msg="StopPodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\"" Jan 13 20:36:31.504030 containerd[1484]: time="2025-01-13T20:36:31.504021404Z" level=info msg="TearDown network for sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" successfully" Jan 13 20:36:31.504030 containerd[1484]: time="2025-01-13T20:36:31.504031043Z" level=info msg="StopPodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" returns successfully" Jan 13 20:36:31.504431 containerd[1484]: time="2025-01-13T20:36:31.504395853Z" level=info msg="RemovePodSandbox for \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\"" Jan 13 20:36:31.504431 containerd[1484]: time="2025-01-13T20:36:31.504417736Z" level=info msg="Forcibly stopping sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\"" Jan 13 20:36:31.504569 containerd[1484]: time="2025-01-13T20:36:31.504509202Z" level=info msg="TearDown network for sandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" successfully" Jan 13 20:36:31.508051 containerd[1484]: time="2025-01-13T20:36:31.508007487Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.508051 containerd[1484]: time="2025-01-13T20:36:31.508038085Z" level=info msg="RemovePodSandbox \"4a55ba5acae6a12661cd1efccfe64f09a2b76babad615ab4241871f94c64b8c7\" returns successfully" Jan 13 20:36:31.508281 containerd[1484]: time="2025-01-13T20:36:31.508240966Z" level=info msg="StopPodSandbox for \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\"" Jan 13 20:36:31.508358 containerd[1484]: time="2025-01-13T20:36:31.508335528Z" level=info msg="TearDown network for sandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\" successfully" Jan 13 20:36:31.508358 containerd[1484]: time="2025-01-13T20:36:31.508349865Z" level=info msg="StopPodSandbox for \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\" returns successfully" Jan 13 20:36:31.508577 containerd[1484]: time="2025-01-13T20:36:31.508544208Z" level=info msg="RemovePodSandbox for \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\"" Jan 13 20:36:31.508577 containerd[1484]: time="2025-01-13T20:36:31.508565469Z" level=info msg="Forcibly stopping sandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\"" Jan 13 20:36:31.508986 containerd[1484]: time="2025-01-13T20:36:31.508628450Z" level=info msg="TearDown network for sandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\" successfully" Jan 13 20:36:31.512072 containerd[1484]: time="2025-01-13T20:36:31.512031342Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.512072 containerd[1484]: time="2025-01-13T20:36:31.512066781Z" level=info msg="RemovePodSandbox \"cc52c7fe5c183347eff3adfb1602b5cbddd41d04bf0c80d423631114a4b68912\" returns successfully" Jan 13 20:36:31.512279 containerd[1484]: time="2025-01-13T20:36:31.512233602Z" level=info msg="StopPodSandbox for \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\"" Jan 13 20:36:31.512614 containerd[1484]: time="2025-01-13T20:36:31.512325899Z" level=info msg="TearDown network for sandbox \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\" successfully" Jan 13 20:36:31.512614 containerd[1484]: time="2025-01-13T20:36:31.512340347Z" level=info msg="StopPodSandbox for \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\" returns successfully" Jan 13 20:36:31.512707 containerd[1484]: time="2025-01-13T20:36:31.512673327Z" level=info msg="RemovePodSandbox for \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\"" Jan 13 20:36:31.512707 containerd[1484]: time="2025-01-13T20:36:31.512690329Z" level=info msg="Forcibly stopping sandbox \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\"" Jan 13 20:36:31.512793 containerd[1484]: time="2025-01-13T20:36:31.512752198Z" level=info msg="TearDown network for sandbox \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\" successfully" Jan 13 20:36:31.516327 containerd[1484]: time="2025-01-13T20:36:31.516220476Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.516327 containerd[1484]: time="2025-01-13T20:36:31.516268377Z" level=info msg="RemovePodSandbox \"398249e30b4a6bb8316a732e20f72c330246995fb0ca6811abbc0e07fe006c8c\" returns successfully" Jan 13 20:36:31.516498 containerd[1484]: time="2025-01-13T20:36:31.516462460Z" level=info msg="StopPodSandbox for \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\"" Jan 13 20:36:31.516599 containerd[1484]: time="2025-01-13T20:36:31.516569697Z" level=info msg="TearDown network for sandbox \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\" successfully" Jan 13 20:36:31.516599 containerd[1484]: time="2025-01-13T20:36:31.516586890Z" level=info msg="StopPodSandbox for \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\" returns successfully" Jan 13 20:36:31.517056 containerd[1484]: time="2025-01-13T20:36:31.517028148Z" level=info msg="RemovePodSandbox for \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\"" Jan 13 20:36:31.517206 containerd[1484]: time="2025-01-13T20:36:31.517188477Z" level=info msg="Forcibly stopping sandbox \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\"" Jan 13 20:36:31.517348 containerd[1484]: time="2025-01-13T20:36:31.517305712Z" level=info msg="TearDown network for sandbox \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\" successfully" Jan 13 20:36:31.521328 containerd[1484]: time="2025-01-13T20:36:31.521291244Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.521328 containerd[1484]: time="2025-01-13T20:36:31.521323776Z" level=info msg="RemovePodSandbox \"bf118afbcfdf0eb52f47f3bb541b458a665fdac9a1005b99293b71bdac0003c1\" returns successfully" Jan 13 20:36:31.521575 containerd[1484]: time="2025-01-13T20:36:31.521542677Z" level=info msg="StopPodSandbox for \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\"" Jan 13 20:36:31.521763 containerd[1484]: time="2025-01-13T20:36:31.521629093Z" level=info msg="TearDown network for sandbox \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\" successfully" Jan 13 20:36:31.521763 containerd[1484]: time="2025-01-13T20:36:31.521643902Z" level=info msg="StopPodSandbox for \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\" returns successfully" Jan 13 20:36:31.521905 containerd[1484]: time="2025-01-13T20:36:31.521851570Z" level=info msg="RemovePodSandbox for \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\"" Jan 13 20:36:31.521905 containerd[1484]: time="2025-01-13T20:36:31.521873213Z" level=info msg="Forcibly stopping sandbox \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\"" Jan 13 20:36:31.522015 containerd[1484]: time="2025-01-13T20:36:31.521936133Z" level=info msg="TearDown network for sandbox \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\" successfully" Jan 13 20:36:31.525760 containerd[1484]: time="2025-01-13T20:36:31.525708826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.525760 containerd[1484]: time="2025-01-13T20:36:31.525760265Z" level=info msg="RemovePodSandbox \"24663ac9e2ec1dbf8106bb41c3fc56b9a13c8f6261447cefbe5b9c4e8f437c79\" returns successfully" Jan 13 20:36:31.526028 containerd[1484]: time="2025-01-13T20:36:31.525993764Z" level=info msg="StopPodSandbox for \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\"" Jan 13 20:36:31.526140 containerd[1484]: time="2025-01-13T20:36:31.526093636Z" level=info msg="TearDown network for sandbox \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\" successfully" Jan 13 20:36:31.526140 containerd[1484]: time="2025-01-13T20:36:31.526117281Z" level=info msg="StopPodSandbox for \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\" returns successfully" Jan 13 20:36:31.526617 containerd[1484]: time="2025-01-13T20:36:31.526592776Z" level=info msg="RemovePodSandbox for \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\"" Jan 13 20:36:31.526833 containerd[1484]: time="2025-01-13T20:36:31.526618014Z" level=info msg="Forcibly stopping sandbox \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\"" Jan 13 20:36:31.526833 containerd[1484]: time="2025-01-13T20:36:31.526688058Z" level=info msg="TearDown network for sandbox \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\" successfully" Jan 13 20:36:31.530585 containerd[1484]: time="2025-01-13T20:36:31.530547248Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.530655 containerd[1484]: time="2025-01-13T20:36:31.530590621Z" level=info msg="RemovePodSandbox \"0d37891a1904296ccdea146ca7abf4314a97003be12338acc00a03207f05cdc6\" returns successfully" Jan 13 20:36:31.530947 containerd[1484]: time="2025-01-13T20:36:31.530922579Z" level=info msg="StopPodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\"" Jan 13 20:36:31.531007 containerd[1484]: time="2025-01-13T20:36:31.531000048Z" level=info msg="TearDown network for sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" successfully" Jan 13 20:36:31.531050 containerd[1484]: time="2025-01-13T20:36:31.531008744Z" level=info msg="StopPodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" returns successfully" Jan 13 20:36:31.531394 containerd[1484]: time="2025-01-13T20:36:31.531353598Z" level=info msg="RemovePodSandbox for \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\"" Jan 13 20:36:31.531394 containerd[1484]: time="2025-01-13T20:36:31.531394356Z" level=info msg="Forcibly stopping sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\"" Jan 13 20:36:31.531592 containerd[1484]: time="2025-01-13T20:36:31.531533102Z" level=info msg="TearDown network for sandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" successfully" Jan 13 20:36:31.535450 containerd[1484]: time="2025-01-13T20:36:31.535399676Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.535511 containerd[1484]: time="2025-01-13T20:36:31.535479349Z" level=info msg="RemovePodSandbox \"fc1ad71ff85a89159305970f4db4a00cdf6d78a10680830a997de6b1c720454b\" returns successfully" Jan 13 20:36:31.535774 containerd[1484]: time="2025-01-13T20:36:31.535741633Z" level=info msg="StopPodSandbox for \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\"" Jan 13 20:36:31.535833 containerd[1484]: time="2025-01-13T20:36:31.535826466Z" level=info msg="TearDown network for sandbox \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\" successfully" Jan 13 20:36:31.535858 containerd[1484]: time="2025-01-13T20:36:31.535834873Z" level=info msg="StopPodSandbox for \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\" returns successfully" Jan 13 20:36:31.536052 containerd[1484]: time="2025-01-13T20:36:31.536031460Z" level=info msg="RemovePodSandbox for \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\"" Jan 13 20:36:31.536052 containerd[1484]: time="2025-01-13T20:36:31.536052220Z" level=info msg="Forcibly stopping sandbox \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\"" Jan 13 20:36:31.536146 containerd[1484]: time="2025-01-13T20:36:31.536119920Z" level=info msg="TearDown network for sandbox \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\" successfully" Jan 13 20:36:31.539768 containerd[1484]: time="2025-01-13T20:36:31.539744989Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.539826 containerd[1484]: time="2025-01-13T20:36:31.539781940Z" level=info msg="RemovePodSandbox \"0803ac6d9e0c427b0cd8c0d28064da38c6266f697e20812e17f153841891459a\" returns successfully" Jan 13 20:36:31.540077 containerd[1484]: time="2025-01-13T20:36:31.540058702Z" level=info msg="StopPodSandbox for \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\"" Jan 13 20:36:31.540216 containerd[1484]: time="2025-01-13T20:36:31.540194544Z" level=info msg="TearDown network for sandbox \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\" successfully" Jan 13 20:36:31.540216 containerd[1484]: time="2025-01-13T20:36:31.540211305Z" level=info msg="StopPodSandbox for \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\" returns successfully" Jan 13 20:36:31.540419 containerd[1484]: time="2025-01-13T20:36:31.540400449Z" level=info msg="RemovePodSandbox for \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\"" Jan 13 20:36:31.540419 containerd[1484]: time="2025-01-13T20:36:31.540416490Z" level=info msg="Forcibly stopping sandbox \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\"" Jan 13 20:36:31.540503 containerd[1484]: time="2025-01-13T20:36:31.540474412Z" level=info msg="TearDown network for sandbox \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\" successfully" Jan 13 20:36:31.544066 containerd[1484]: time="2025-01-13T20:36:31.544034035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.544115 containerd[1484]: time="2025-01-13T20:36:31.544068531Z" level=info msg="RemovePodSandbox \"e6ff49c20a7b078035d2de5803609e7e984160264197549fe29d2b10a7b527ef\" returns successfully" Jan 13 20:36:31.544342 containerd[1484]: time="2025-01-13T20:36:31.544313662Z" level=info msg="StopPodSandbox for \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\"" Jan 13 20:36:31.544423 containerd[1484]: time="2025-01-13T20:36:31.544396621Z" level=info msg="TearDown network for sandbox \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\" successfully" Jan 13 20:36:31.544423 containerd[1484]: time="2025-01-13T20:36:31.544409997Z" level=info msg="StopPodSandbox for \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\" returns successfully" Jan 13 20:36:31.544688 containerd[1484]: time="2025-01-13T20:36:31.544662743Z" level=info msg="RemovePodSandbox for \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\"" Jan 13 20:36:31.544688 containerd[1484]: time="2025-01-13T20:36:31.544684014Z" level=info msg="Forcibly stopping sandbox \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\"" Jan 13 20:36:31.544779 containerd[1484]: time="2025-01-13T20:36:31.544749219Z" level=info msg="TearDown network for sandbox \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\" successfully" Jan 13 20:36:31.548693 containerd[1484]: time="2025-01-13T20:36:31.548594211Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.548693 containerd[1484]: time="2025-01-13T20:36:31.548660028Z" level=info msg="RemovePodSandbox \"6964670c137dcaf2af27ef246f07b125f4886204652deeb0318859405a48fcff\" returns successfully" Jan 13 20:36:31.549004 containerd[1484]: time="2025-01-13T20:36:31.548966186Z" level=info msg="StopPodSandbox for \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\"" Jan 13 20:36:31.549140 containerd[1484]: time="2025-01-13T20:36:31.549069845Z" level=info msg="TearDown network for sandbox \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\" successfully" Jan 13 20:36:31.549140 containerd[1484]: time="2025-01-13T20:36:31.549087910Z" level=info msg="StopPodSandbox for \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\" returns successfully" Jan 13 20:36:31.549457 containerd[1484]: time="2025-01-13T20:36:31.549430508Z" level=info msg="RemovePodSandbox for \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\"" Jan 13 20:36:31.549457 containerd[1484]: time="2025-01-13T20:36:31.549453803Z" level=info msg="Forcibly stopping sandbox \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\"" Jan 13 20:36:31.549571 containerd[1484]: time="2025-01-13T20:36:31.549530992Z" level=info msg="TearDown network for sandbox \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\" successfully" Jan 13 20:36:31.553409 containerd[1484]: time="2025-01-13T20:36:31.553318253Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.553409 containerd[1484]: time="2025-01-13T20:36:31.553363389Z" level=info msg="RemovePodSandbox \"a420a762dc022f1ec9553dddc0ac6649704e7749e469b6d707604c72518a6e3b\" returns successfully" Jan 13 20:36:31.553674 containerd[1484]: time="2025-01-13T20:36:31.553601286Z" level=info msg="StopPodSandbox for \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\"" Jan 13 20:36:31.553753 containerd[1484]: time="2025-01-13T20:36:31.553682242Z" level=info msg="TearDown network for sandbox \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\" successfully" Jan 13 20:36:31.553753 containerd[1484]: time="2025-01-13T20:36:31.553690979Z" level=info msg="StopPodSandbox for \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\" returns successfully" Jan 13 20:36:31.553943 containerd[1484]: time="2025-01-13T20:36:31.553886275Z" level=info msg="RemovePodSandbox for \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\"" Jan 13 20:36:31.553943 containerd[1484]: time="2025-01-13T20:36:31.553911403Z" level=info msg="Forcibly stopping sandbox \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\"" Jan 13 20:36:31.554040 containerd[1484]: time="2025-01-13T20:36:31.553995254Z" level=info msg="TearDown network for sandbox \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\" successfully" Jan 13 20:36:31.557669 containerd[1484]: time="2025-01-13T20:36:31.557641553Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.557787 containerd[1484]: time="2025-01-13T20:36:31.557763899Z" level=info msg="RemovePodSandbox \"5f0b6186f4c5ca483f79616872a568527ae481e637290690484b3173d464febc\" returns successfully" Jan 13 20:36:31.558108 containerd[1484]: time="2025-01-13T20:36:31.558088023Z" level=info msg="StopPodSandbox for \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\"" Jan 13 20:36:31.558218 containerd[1484]: time="2025-01-13T20:36:31.558169538Z" level=info msg="TearDown network for sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" successfully" Jan 13 20:36:31.558218 containerd[1484]: time="2025-01-13T20:36:31.558180961Z" level=info msg="StopPodSandbox for \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" returns successfully" Jan 13 20:36:31.558477 containerd[1484]: time="2025-01-13T20:36:31.558423717Z" level=info msg="RemovePodSandbox for \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\"" Jan 13 20:36:31.558477 containerd[1484]: time="2025-01-13T20:36:31.558472992Z" level=info msg="Forcibly stopping sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\"" Jan 13 20:36:31.558571 containerd[1484]: time="2025-01-13T20:36:31.558543508Z" level=info msg="TearDown network for sandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" successfully" Jan 13 20:36:31.562414 containerd[1484]: time="2025-01-13T20:36:31.562378800Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.562474 containerd[1484]: time="2025-01-13T20:36:31.562429218Z" level=info msg="RemovePodSandbox \"b15847af556bf1e35719384b566f14b0049e02b6ba822225b7c2ed731ec40aec\" returns successfully" Jan 13 20:36:31.562895 containerd[1484]: time="2025-01-13T20:36:31.562849215Z" level=info msg="StopPodSandbox for \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\"" Jan 13 20:36:31.563011 containerd[1484]: time="2025-01-13T20:36:31.562991638Z" level=info msg="TearDown network for sandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\" successfully" Jan 13 20:36:31.563011 containerd[1484]: time="2025-01-13T20:36:31.563005686Z" level=info msg="StopPodSandbox for \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\" returns successfully" Jan 13 20:36:31.563323 containerd[1484]: time="2025-01-13T20:36:31.563296956Z" level=info msg="RemovePodSandbox for \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\"" Jan 13 20:36:31.563323 containerd[1484]: time="2025-01-13T20:36:31.563321152Z" level=info msg="Forcibly stopping sandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\"" Jan 13 20:36:31.563448 containerd[1484]: time="2025-01-13T20:36:31.563403971Z" level=info msg="TearDown network for sandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\" successfully" Jan 13 20:36:31.567128 containerd[1484]: time="2025-01-13T20:36:31.567098333Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.567169 containerd[1484]: time="2025-01-13T20:36:31.567130966Z" level=info msg="RemovePodSandbox \"8e5979fdd55c4d49234f0d20b2c69195b0e9ab69e779ffc1112ab47a6f63d689\" returns successfully" Jan 13 20:36:31.567529 containerd[1484]: time="2025-01-13T20:36:31.567369324Z" level=info msg="StopPodSandbox for \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\"" Jan 13 20:36:31.567529 containerd[1484]: time="2025-01-13T20:36:31.567453366Z" level=info msg="TearDown network for sandbox \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\" successfully" Jan 13 20:36:31.567529 containerd[1484]: time="2025-01-13T20:36:31.567464407Z" level=info msg="StopPodSandbox for \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\" returns successfully" Jan 13 20:36:31.567731 containerd[1484]: time="2025-01-13T20:36:31.567699469Z" level=info msg="RemovePodSandbox for \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\"" Jan 13 20:36:31.567731 containerd[1484]: time="2025-01-13T20:36:31.567717424Z" level=info msg="Forcibly stopping sandbox \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\"" Jan 13 20:36:31.567819 containerd[1484]: time="2025-01-13T20:36:31.567786186Z" level=info msg="TearDown network for sandbox \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\" successfully" Jan 13 20:36:31.571396 containerd[1484]: time="2025-01-13T20:36:31.571368252Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.571504 containerd[1484]: time="2025-01-13T20:36:31.571411625Z" level=info msg="RemovePodSandbox \"8b787457de55e5cd73d2aa48cef4a779d3e603c3db81c3ce35cc8e131c2f35b1\" returns successfully" Jan 13 20:36:31.571780 containerd[1484]: time="2025-01-13T20:36:31.571634233Z" level=info msg="StopPodSandbox for \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\"" Jan 13 20:36:31.571780 containerd[1484]: time="2025-01-13T20:36:31.571720289Z" level=info msg="TearDown network for sandbox \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\" successfully" Jan 13 20:36:31.571780 containerd[1484]: time="2025-01-13T20:36:31.571729466Z" level=info msg="StopPodSandbox for \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\" returns successfully" Jan 13 20:36:31.572127 containerd[1484]: time="2025-01-13T20:36:31.572072095Z" level=info msg="RemovePodSandbox for \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\"" Jan 13 20:36:31.572127 containerd[1484]: time="2025-01-13T20:36:31.572093296Z" level=info msg="Forcibly stopping sandbox \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\"" Jan 13 20:36:31.572331 containerd[1484]: time="2025-01-13T20:36:31.572256199Z" level=info msg="TearDown network for sandbox \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\" successfully" Jan 13 20:36:31.575952 containerd[1484]: time="2025-01-13T20:36:31.575923560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.576059 containerd[1484]: time="2025-01-13T20:36:31.575959007Z" level=info msg="RemovePodSandbox \"dbbb31277b208f7efed4a6abc3eac9d6f432df577db388f1b3d99d632bdd913d\" returns successfully" Jan 13 20:36:31.576178 containerd[1484]: time="2025-01-13T20:36:31.576149553Z" level=info msg="StopPodSandbox for \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\"" Jan 13 20:36:31.576268 containerd[1484]: time="2025-01-13T20:36:31.576231551Z" level=info msg="TearDown network for sandbox \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\" successfully" Jan 13 20:36:31.576268 containerd[1484]: time="2025-01-13T20:36:31.576260537Z" level=info msg="StopPodSandbox for \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\" returns successfully" Jan 13 20:36:31.576480 containerd[1484]: time="2025-01-13T20:36:31.576455893Z" level=info msg="RemovePodSandbox for \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\"" Jan 13 20:36:31.576480 containerd[1484]: time="2025-01-13T20:36:31.576477294Z" level=info msg="Forcibly stopping sandbox \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\"" Jan 13 20:36:31.576560 containerd[1484]: time="2025-01-13T20:36:31.576546627Z" level=info msg="TearDown network for sandbox \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\" successfully" Jan 13 20:36:31.580088 containerd[1484]: time="2025-01-13T20:36:31.580060742Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.580185 containerd[1484]: time="2025-01-13T20:36:31.580095660Z" level=info msg="RemovePodSandbox \"6b3f6317f199316e0f7ef6fe73ca18d9a6daf33db1230a7b24062029c42dbd1f\" returns successfully" Jan 13 20:36:31.580417 containerd[1484]: time="2025-01-13T20:36:31.580332044Z" level=info msg="StopPodSandbox for \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\"" Jan 13 20:36:31.580599 containerd[1484]: time="2025-01-13T20:36:31.580488104Z" level=info msg="TearDown network for sandbox \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\" successfully" Jan 13 20:36:31.580599 containerd[1484]: time="2025-01-13T20:36:31.580501680Z" level=info msg="StopPodSandbox for \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\" returns successfully" Jan 13 20:36:31.580709 containerd[1484]: time="2025-01-13T20:36:31.580679201Z" level=info msg="RemovePodSandbox for \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\"" Jan 13 20:36:31.580709 containerd[1484]: time="2025-01-13T20:36:31.580700211Z" level=info msg="Forcibly stopping sandbox \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\"" Jan 13 20:36:31.580815 containerd[1484]: time="2025-01-13T20:36:31.580769074Z" level=info msg="TearDown network for sandbox \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\" successfully" Jan 13 20:36:31.584363 containerd[1484]: time="2025-01-13T20:36:31.584329549Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:36:31.584433 containerd[1484]: time="2025-01-13T20:36:31.584397109Z" level=info msg="RemovePodSandbox \"48fd073b28466c4c317c1f2b18dcdb1c39bfc1c2d18c6177fca43656a6a1d11c\" returns successfully" Jan 13 20:36:31.748680 sshd[6111]: Connection closed by 10.0.0.1 port 48336 Jan 13 20:36:31.747207 sshd-session[6106]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:31.758167 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:48336.service: Deactivated successfully. Jan 13 20:36:31.761135 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:36:31.764321 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:36:31.774564 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:48344.service - OpenSSH per-connection server daemon (10.0.0.1:48344). Jan 13 20:36:31.776707 systemd-logind[1468]: Removed session 18. Jan 13 20:36:31.809801 sshd[6122]: Accepted publickey for core from 10.0.0.1 port 48344 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:31.811589 sshd-session[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:31.816223 systemd-logind[1468]: New session 19 of user core. Jan 13 20:36:31.824392 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:36:31.932008 sshd[6124]: Connection closed by 10.0.0.1 port 48344 Jan 13 20:36:31.932410 sshd-session[6122]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:31.936221 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:48344.service: Deactivated successfully. Jan 13 20:36:31.938346 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:36:31.938931 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:36:31.939871 systemd-logind[1468]: Removed session 19. 
Jan 13 20:36:35.321036 kubelet[2667]: I0113 20:36:35.320965 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:36:36.944648 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:48356.service - OpenSSH per-connection server daemon (10.0.0.1:48356). Jan 13 20:36:36.987625 sshd[6168]: Accepted publickey for core from 10.0.0.1 port 48356 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:36.989740 sshd-session[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:36.994619 systemd-logind[1468]: New session 20 of user core. Jan 13 20:36:37.005498 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:36:37.125449 sshd[6170]: Connection closed by 10.0.0.1 port 48356 Jan 13 20:36:37.125879 sshd-session[6168]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:37.130627 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:48356.service: Deactivated successfully. Jan 13 20:36:37.132955 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:36:37.133687 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:36:37.134682 systemd-logind[1468]: Removed session 20. Jan 13 20:36:42.138301 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:40022.service - OpenSSH per-connection server daemon (10.0.0.1:40022). Jan 13 20:36:42.183456 sshd[6182]: Accepted publickey for core from 10.0.0.1 port 40022 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:42.185360 sshd-session[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:42.190521 systemd-logind[1468]: New session 21 of user core. Jan 13 20:36:42.201452 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 13 20:36:42.323297 sshd[6184]: Connection closed by 10.0.0.1 port 40022 Jan 13 20:36:42.323750 sshd-session[6182]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:42.328942 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:40022.service: Deactivated successfully. Jan 13 20:36:42.331347 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:36:42.332131 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:36:42.333157 systemd-logind[1468]: Removed session 21. Jan 13 20:36:44.273001 kubelet[2667]: E0113 20:36:44.272940 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:47.339357 systemd[1]: Started sshd@21-10.0.0.79:22-10.0.0.1:40024.service - OpenSSH per-connection server daemon (10.0.0.1:40024). Jan 13 20:36:47.378016 sshd[6201]: Accepted publickey for core from 10.0.0.1 port 40024 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:47.379636 sshd-session[6201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:47.383802 systemd-logind[1468]: New session 22 of user core. Jan 13 20:36:47.392390 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:36:47.506803 sshd[6203]: Connection closed by 10.0.0.1 port 40024 Jan 13 20:36:47.507203 sshd-session[6201]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:47.511682 systemd[1]: sshd@21-10.0.0.79:22-10.0.0.1:40024.service: Deactivated successfully. Jan 13 20:36:47.514117 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:36:47.514931 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:36:47.515808 systemd-logind[1468]: Removed session 22. 
Jan 13 20:36:52.288275 containerd[1484]: time="2025-01-13T20:36:52.287181795Z" level=info msg="StopContainer for \"43a79136bce0a6a74f5b811cf8700c71c292db51842d06802a46c5f084417bca\" with timeout 300 (s)" Jan 13 20:36:52.288275 containerd[1484]: time="2025-01-13T20:36:52.288004119Z" level=info msg="Stop container \"43a79136bce0a6a74f5b811cf8700c71c292db51842d06802a46c5f084417bca\" with signal terminated" Jan 13 20:36:52.451138 kubelet[2667]: E0113 20:36:52.451072 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:52.532540 systemd[1]: Started sshd@22-10.0.0.79:22-10.0.0.1:43762.service - OpenSSH per-connection server daemon (10.0.0.1:43762). Jan 13 20:36:52.576611 sshd[6247]: Accepted publickey for core from 10.0.0.1 port 43762 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:36:52.578994 sshd-session[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:52.589034 systemd-logind[1468]: New session 23 of user core. Jan 13 20:36:52.593773 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:36:52.611666 containerd[1484]: time="2025-01-13T20:36:52.611566247Z" level=info msg="StopContainer for \"b4c38840fe73bc22489d7aff9f65c6afaba16113fe9c3678582117434f3c6463\" with timeout 30 (s)" Jan 13 20:36:52.612491 containerd[1484]: time="2025-01-13T20:36:52.612398992Z" level=info msg="Stop container \"b4c38840fe73bc22489d7aff9f65c6afaba16113fe9c3678582117434f3c6463\" with signal terminated" Jan 13 20:36:52.626199 systemd[1]: cri-containerd-b4c38840fe73bc22489d7aff9f65c6afaba16113fe9c3678582117434f3c6463.scope: Deactivated successfully. 
Jan 13 20:36:52.661965 containerd[1484]: time="2025-01-13T20:36:52.661897813Z" level=info msg="shim disconnected" id=b4c38840fe73bc22489d7aff9f65c6afaba16113fe9c3678582117434f3c6463 namespace=k8s.io Jan 13 20:36:52.661965 containerd[1484]: time="2025-01-13T20:36:52.661955692Z" level=warning msg="cleaning up after shim disconnected" id=b4c38840fe73bc22489d7aff9f65c6afaba16113fe9c3678582117434f3c6463 namespace=k8s.io Jan 13 20:36:52.661965 containerd[1484]: time="2025-01-13T20:36:52.661964039Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:36:52.663960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4c38840fe73bc22489d7aff9f65c6afaba16113fe9c3678582117434f3c6463-rootfs.mount: Deactivated successfully. Jan 13 20:36:52.710811 containerd[1484]: time="2025-01-13T20:36:52.710745484Z" level=info msg="StopContainer for \"b4c38840fe73bc22489d7aff9f65c6afaba16113fe9c3678582117434f3c6463\" returns successfully" Jan 13 20:36:52.711606 containerd[1484]: time="2025-01-13T20:36:52.711427362Z" level=info msg="StopPodSandbox for \"8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229\"" Jan 13 20:36:52.711606 containerd[1484]: time="2025-01-13T20:36:52.711462238Z" level=info msg="Container to stop \"b4c38840fe73bc22489d7aff9f65c6afaba16113fe9c3678582117434f3c6463\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:36:52.716466 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229-shm.mount: Deactivated successfully. Jan 13 20:36:52.722571 systemd[1]: cri-containerd-8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229.scope: Deactivated successfully. 
Jan 13 20:36:52.759064 sshd[6249]: Connection closed by 10.0.0.1 port 43762 Jan 13 20:36:52.758824 sshd-session[6247]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:52.759653 containerd[1484]: time="2025-01-13T20:36:52.758516098Z" level=info msg="shim disconnected" id=8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229 namespace=k8s.io Jan 13 20:36:52.759653 containerd[1484]: time="2025-01-13T20:36:52.759158891Z" level=warning msg="cleaning up after shim disconnected" id=8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229 namespace=k8s.io Jan 13 20:36:52.760455 containerd[1484]: time="2025-01-13T20:36:52.759171134Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:36:52.761966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229-rootfs.mount: Deactivated successfully. Jan 13 20:36:52.764291 systemd[1]: sshd@22-10.0.0.79:22-10.0.0.1:43762.service: Deactivated successfully. Jan 13 20:36:52.767294 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:36:52.770459 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:36:52.771846 systemd-logind[1468]: Removed session 23. 
Jan 13 20:36:52.810655 kubelet[2667]: I0113 20:36:52.810607 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Jan 13 20:36:52.843439 systemd-networkd[1393]: califf20be245cd: Link DOWN Jan 13 20:36:52.843450 systemd-networkd[1393]: califf20be245cd: Lost carrier Jan 13 20:36:52.919722 containerd[1484]: time="2025-01-13T20:36:52.919601018Z" level=info msg="StopContainer for \"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8\" with timeout 5 (s)" Jan 13 20:36:52.920067 containerd[1484]: time="2025-01-13T20:36:52.919854700Z" level=info msg="Stop container \"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8\" with signal terminated" Jan 13 20:36:52.935716 containerd[1484]: 2025-01-13 20:36:52.842 [INFO][6337] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Jan 13 20:36:52.935716 containerd[1484]: 2025-01-13 20:36:52.842 [INFO][6337] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" iface="eth0" netns="/var/run/netns/cni-fa0d1222-857b-69c2-7b5c-ebb118a7ff8c" Jan 13 20:36:52.935716 containerd[1484]: 2025-01-13 20:36:52.842 [INFO][6337] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" iface="eth0" netns="/var/run/netns/cni-fa0d1222-857b-69c2-7b5c-ebb118a7ff8c" Jan 13 20:36:52.935716 containerd[1484]: 2025-01-13 20:36:52.855 [INFO][6337] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" after=13.285512ms iface="eth0" netns="/var/run/netns/cni-fa0d1222-857b-69c2-7b5c-ebb118a7ff8c" Jan 13 20:36:52.935716 containerd[1484]: 2025-01-13 20:36:52.855 [INFO][6337] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Jan 13 20:36:52.935716 containerd[1484]: 2025-01-13 20:36:52.855 [INFO][6337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Jan 13 20:36:52.935716 containerd[1484]: 2025-01-13 20:36:52.882 [INFO][6367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" HandleID="k8s-pod-network.8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Workload="localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0" Jan 13 20:36:52.935716 containerd[1484]: 2025-01-13 20:36:52.883 [INFO][6367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:36:52.935716 containerd[1484]: 2025-01-13 20:36:52.883 [INFO][6367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:36:52.935716 containerd[1484]: 2025-01-13 20:36:52.923 [INFO][6367] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" HandleID="k8s-pod-network.8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Workload="localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0" Jan 13 20:36:52.935716 containerd[1484]: 2025-01-13 20:36:52.923 [INFO][6367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" HandleID="k8s-pod-network.8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Workload="localhost-k8s-calico--kube--controllers--bc744d498--xqhts-eth0" Jan 13 20:36:52.935716 containerd[1484]: 2025-01-13 20:36:52.925 [INFO][6367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:36:52.935716 containerd[1484]: 2025-01-13 20:36:52.930 [INFO][6337] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229" Jan 13 20:36:52.936416 containerd[1484]: time="2025-01-13T20:36:52.936234639Z" level=info msg="TearDown network for sandbox \"8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229\" successfully" Jan 13 20:36:52.936416 containerd[1484]: time="2025-01-13T20:36:52.936281639Z" level=info msg="StopPodSandbox for \"8ce33e78e72b84b201298aeac12335286f8c8eb126879e927c3b48ea7ea07229\" returns successfully" Jan 13 20:36:52.948635 systemd[1]: cri-containerd-6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8.scope: Deactivated successfully. Jan 13 20:36:52.948951 systemd[1]: cri-containerd-6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8.scope: Consumed 2.281s CPU time. 
Jan 13 20:36:52.972551 containerd[1484]: time="2025-01-13T20:36:52.972487793Z" level=info msg="shim disconnected" id=6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8 namespace=k8s.io Jan 13 20:36:52.972551 containerd[1484]: time="2025-01-13T20:36:52.972545353Z" level=warning msg="cleaning up after shim disconnected" id=6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8 namespace=k8s.io Jan 13 20:36:52.972551 containerd[1484]: time="2025-01-13T20:36:52.972556243Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:36:52.999434 containerd[1484]: time="2025-01-13T20:36:52.999374549Z" level=info msg="StopContainer for \"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8\" returns successfully" Jan 13 20:36:52.999909 containerd[1484]: time="2025-01-13T20:36:52.999881213Z" level=info msg="StopPodSandbox for \"1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda\"" Jan 13 20:36:52.999978 containerd[1484]: time="2025-01-13T20:36:52.999926189Z" level=info msg="Container to stop \"bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:36:52.999978 containerd[1484]: time="2025-01-13T20:36:52.999968520Z" level=info msg="Container to stop \"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:36:53.000048 containerd[1484]: time="2025-01-13T20:36:52.999981033Z" level=info msg="Container to stop \"34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:36:53.007831 systemd[1]: cri-containerd-1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda.scope: Deactivated successfully. 
Jan 13 20:36:53.027701 containerd[1484]: time="2025-01-13T20:36:53.027634560Z" level=info msg="shim disconnected" id=1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda namespace=k8s.io Jan 13 20:36:53.027947 containerd[1484]: time="2025-01-13T20:36:53.027910015Z" level=warning msg="cleaning up after shim disconnected" id=1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda namespace=k8s.io Jan 13 20:36:53.027947 containerd[1484]: time="2025-01-13T20:36:53.027934020Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:36:53.033453 kubelet[2667]: I0113 20:36:53.033409 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac41f38c-55b9-4a77-8a90-e5737b17fd15-tigera-ca-bundle\") pod \"ac41f38c-55b9-4a77-8a90-e5737b17fd15\" (UID: \"ac41f38c-55b9-4a77-8a90-e5737b17fd15\") " Jan 13 20:36:53.034123 kubelet[2667]: I0113 20:36:53.033705 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wkpn\" (UniqueName: \"kubernetes.io/projected/ac41f38c-55b9-4a77-8a90-e5737b17fd15-kube-api-access-2wkpn\") pod \"ac41f38c-55b9-4a77-8a90-e5737b17fd15\" (UID: \"ac41f38c-55b9-4a77-8a90-e5737b17fd15\") " Jan 13 20:36:53.037269 kubelet[2667]: I0113 20:36:53.037198 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac41f38c-55b9-4a77-8a90-e5737b17fd15-kube-api-access-2wkpn" (OuterVolumeSpecName: "kube-api-access-2wkpn") pod "ac41f38c-55b9-4a77-8a90-e5737b17fd15" (UID: "ac41f38c-55b9-4a77-8a90-e5737b17fd15"). InnerVolumeSpecName "kube-api-access-2wkpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:36:53.039365 kubelet[2667]: I0113 20:36:53.039341 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac41f38c-55b9-4a77-8a90-e5737b17fd15-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ac41f38c-55b9-4a77-8a90-e5737b17fd15" (UID: "ac41f38c-55b9-4a77-8a90-e5737b17fd15"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:36:53.053765 containerd[1484]: time="2025-01-13T20:36:53.053710547Z" level=info msg="TearDown network for sandbox \"1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda\" successfully" Jan 13 20:36:53.053765 containerd[1484]: time="2025-01-13T20:36:53.053755232Z" level=info msg="StopPodSandbox for \"1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda\" returns successfully" Jan 13 20:36:53.087491 kubelet[2667]: I0113 20:36:53.087434 2667 topology_manager.go:215] "Topology Admit Handler" podUID="eac427fd-59da-43f0-b512-3a55c92cfb8c" podNamespace="calico-system" podName="calico-node-8l6rs" Jan 13 20:36:53.087491 kubelet[2667]: E0113 20:36:53.087516 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" containerName="calico-node" Jan 13 20:36:53.087805 kubelet[2667]: E0113 20:36:53.087531 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac41f38c-55b9-4a77-8a90-e5737b17fd15" containerName="calico-kube-controllers" Jan 13 20:36:53.087805 kubelet[2667]: E0113 20:36:53.087542 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" containerName="flexvol-driver" Jan 13 20:36:53.087805 kubelet[2667]: E0113 20:36:53.087551 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" containerName="install-cni" Jan 13 20:36:53.091235 kubelet[2667]: I0113 20:36:53.091188 
2667 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac41f38c-55b9-4a77-8a90-e5737b17fd15" containerName="calico-kube-controllers" Jan 13 20:36:53.091235 kubelet[2667]: I0113 20:36:53.091228 2667 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" containerName="calico-node" Jan 13 20:36:53.102969 systemd[1]: Created slice kubepods-besteffort-podeac427fd_59da_43f0_b512_3a55c92cfb8c.slice - libcontainer container kubepods-besteffort-podeac427fd_59da_43f0_b512_3a55c92cfb8c.slice. Jan 13 20:36:53.134001 kubelet[2667]: I0113 20:36:53.133937 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-cni-net-dir\") pod \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " Jan 13 20:36:53.134001 kubelet[2667]: I0113 20:36:53.133986 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-cni-bin-dir\") pod \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " Jan 13 20:36:53.134199 kubelet[2667]: I0113 20:36:53.134028 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-flexvol-driver-host\") pod \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " Jan 13 20:36:53.134199 kubelet[2667]: I0113 20:36:53.134047 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-xtables-lock\") pod \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " Jan 13 20:36:53.134199 kubelet[2667]: I0113 
20:36:53.134062 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-policysync\") pod \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " Jan 13 20:36:53.134199 kubelet[2667]: I0113 20:36:53.134057 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" (UID: "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:36:53.134199 kubelet[2667]: I0113 20:36:53.134081 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-tigera-ca-bundle\") pod \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " Jan 13 20:36:53.134908 kubelet[2667]: I0113 20:36:53.134135 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" (UID: "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:36:53.134908 kubelet[2667]: I0113 20:36:53.134175 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" (UID: "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:36:53.134908 kubelet[2667]: I0113 20:36:53.134183 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-var-run-calico\") pod \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " Jan 13 20:36:53.134908 kubelet[2667]: I0113 20:36:53.134200 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" (UID: "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:36:53.134908 kubelet[2667]: I0113 20:36:53.134221 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbz5w\" (UniqueName: \"kubernetes.io/projected/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-kube-api-access-vbz5w\") pod \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " Jan 13 20:36:53.135080 kubelet[2667]: I0113 20:36:53.134285 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-lib-modules\") pod \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " Jan 13 20:36:53.135080 kubelet[2667]: I0113 20:36:53.134315 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-var-lib-calico\") pod \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " Jan 13 20:36:53.135080 kubelet[2667]: I0113 20:36:53.134373 2667 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-node-certs\") pod \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " Jan 13 20:36:53.135080 kubelet[2667]: I0113 20:36:53.134398 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-cni-log-dir\") pod \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\" (UID: \"ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf\") " Jan 13 20:36:53.135080 kubelet[2667]: I0113 20:36:53.134473 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/eac427fd-59da-43f0-b512-3a55c92cfb8c-node-certs\") pod \"calico-node-8l6rs\" (UID: \"eac427fd-59da-43f0-b512-3a55c92cfb8c\") " pod="calico-system/calico-node-8l6rs" Jan 13 20:36:53.135080 kubelet[2667]: I0113 20:36:53.134501 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/eac427fd-59da-43f0-b512-3a55c92cfb8c-policysync\") pod \"calico-node-8l6rs\" (UID: \"eac427fd-59da-43f0-b512-3a55c92cfb8c\") " pod="calico-system/calico-node-8l6rs" Jan 13 20:36:53.135280 kubelet[2667]: I0113 20:36:53.134224 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" (UID: "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:36:53.135280 kubelet[2667]: I0113 20:36:53.134530 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eac427fd-59da-43f0-b512-3a55c92cfb8c-tigera-ca-bundle\") pod \"calico-node-8l6rs\" (UID: \"eac427fd-59da-43f0-b512-3a55c92cfb8c\") " pod="calico-system/calico-node-8l6rs" Jan 13 20:36:53.135280 kubelet[2667]: I0113 20:36:53.134557 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eac427fd-59da-43f0-b512-3a55c92cfb8c-xtables-lock\") pod \"calico-node-8l6rs\" (UID: \"eac427fd-59da-43f0-b512-3a55c92cfb8c\") " pod="calico-system/calico-node-8l6rs" Jan 13 20:36:53.135280 kubelet[2667]: I0113 20:36:53.134590 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/eac427fd-59da-43f0-b512-3a55c92cfb8c-flexvol-driver-host\") pod \"calico-node-8l6rs\" (UID: \"eac427fd-59da-43f0-b512-3a55c92cfb8c\") " pod="calico-system/calico-node-8l6rs" Jan 13 20:36:53.135280 kubelet[2667]: I0113 20:36:53.134615 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/eac427fd-59da-43f0-b512-3a55c92cfb8c-cni-net-dir\") pod \"calico-node-8l6rs\" (UID: \"eac427fd-59da-43f0-b512-3a55c92cfb8c\") " pod="calico-system/calico-node-8l6rs" Jan 13 20:36:53.135506 kubelet[2667]: I0113 20:36:53.134647 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/eac427fd-59da-43f0-b512-3a55c92cfb8c-cni-log-dir\") pod \"calico-node-8l6rs\" (UID: \"eac427fd-59da-43f0-b512-3a55c92cfb8c\") " pod="calico-system/calico-node-8l6rs" Jan 13 
20:36:53.135506 kubelet[2667]: I0113 20:36:53.134672 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4xk6\" (UniqueName: \"kubernetes.io/projected/eac427fd-59da-43f0-b512-3a55c92cfb8c-kube-api-access-v4xk6\") pod \"calico-node-8l6rs\" (UID: \"eac427fd-59da-43f0-b512-3a55c92cfb8c\") " pod="calico-system/calico-node-8l6rs" Jan 13 20:36:53.135506 kubelet[2667]: I0113 20:36:53.134696 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eac427fd-59da-43f0-b512-3a55c92cfb8c-var-lib-calico\") pod \"calico-node-8l6rs\" (UID: \"eac427fd-59da-43f0-b512-3a55c92cfb8c\") " pod="calico-system/calico-node-8l6rs" Jan 13 20:36:53.135506 kubelet[2667]: I0113 20:36:53.134721 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/eac427fd-59da-43f0-b512-3a55c92cfb8c-cni-bin-dir\") pod \"calico-node-8l6rs\" (UID: \"eac427fd-59da-43f0-b512-3a55c92cfb8c\") " pod="calico-system/calico-node-8l6rs" Jan 13 20:36:53.135506 kubelet[2667]: I0113 20:36:53.134751 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/eac427fd-59da-43f0-b512-3a55c92cfb8c-var-run-calico\") pod \"calico-node-8l6rs\" (UID: \"eac427fd-59da-43f0-b512-3a55c92cfb8c\") " pod="calico-system/calico-node-8l6rs" Jan 13 20:36:53.135670 kubelet[2667]: I0113 20:36:53.134777 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eac427fd-59da-43f0-b512-3a55c92cfb8c-lib-modules\") pod \"calico-node-8l6rs\" (UID: \"eac427fd-59da-43f0-b512-3a55c92cfb8c\") " pod="calico-system/calico-node-8l6rs" Jan 13 20:36:53.135670 kubelet[2667]: I0113 20:36:53.134826 2667 
reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Jan 13 20:36:53.135670 kubelet[2667]: I0113 20:36:53.134842 2667 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac41f38c-55b9-4a77-8a90-e5737b17fd15-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 13 20:36:53.135670 kubelet[2667]: I0113 20:36:53.134855 2667 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Jan 13 20:36:53.135670 kubelet[2667]: I0113 20:36:53.134869 2667 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 13 20:36:53.135670 kubelet[2667]: I0113 20:36:53.134880 2667 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Jan 13 20:36:53.135670 kubelet[2667]: I0113 20:36:53.134893 2667 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2wkpn\" (UniqueName: \"kubernetes.io/projected/ac41f38c-55b9-4a77-8a90-e5737b17fd15-kube-api-access-2wkpn\") on node \"localhost\" DevicePath \"\"" Jan 13 20:36:53.135670 kubelet[2667]: I0113 20:36:53.134906 2667 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-var-run-calico\") on node \"localhost\" DevicePath \"\"" Jan 13 20:36:53.135920 kubelet[2667]: I0113 20:36:53.134238 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-policysync" (OuterVolumeSpecName: "policysync") pod "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" (UID: "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:36:53.135920 kubelet[2667]: I0113 20:36:53.134499 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" (UID: "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:36:53.135920 kubelet[2667]: I0113 20:36:53.134550 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" (UID: "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:36:53.135920 kubelet[2667]: I0113 20:36:53.135066 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" (UID: "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:36:53.137478 kubelet[2667]: I0113 20:36:53.137439 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-kube-api-access-vbz5w" (OuterVolumeSpecName: "kube-api-access-vbz5w") pod "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" (UID: "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf"). InnerVolumeSpecName "kube-api-access-vbz5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:36:53.138660 kubelet[2667]: I0113 20:36:53.138630 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" (UID: "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:36:53.139217 kubelet[2667]: I0113 20:36:53.139180 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-node-certs" (OuterVolumeSpecName: "node-certs") pod "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" (UID: "ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:36:53.235629 kubelet[2667]: I0113 20:36:53.235589 2667 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Jan 13 20:36:53.235930 kubelet[2667]: I0113 20:36:53.235829 2667 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-policysync\") on node \"localhost\" DevicePath \"\"" Jan 13 20:36:53.235930 kubelet[2667]: I0113 20:36:53.235844 2667 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vbz5w\" (UniqueName: \"kubernetes.io/projected/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-kube-api-access-vbz5w\") on node \"localhost\" DevicePath \"\"" Jan 13 20:36:53.235930 kubelet[2667]: I0113 20:36:53.235856 2667 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-tigera-ca-bundle\") on node \"localhost\" 
DevicePath \"\"" Jan 13 20:36:53.235930 kubelet[2667]: I0113 20:36:53.235864 2667 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 13 20:36:53.235930 kubelet[2667]: I0113 20:36:53.235873 2667 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Jan 13 20:36:53.235930 kubelet[2667]: I0113 20:36:53.235881 2667 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf-node-certs\") on node \"localhost\" DevicePath \"\"" Jan 13 20:36:53.281875 systemd[1]: Removed slice kubepods-besteffort-podce1fad39_7ddb_4f27_8aaa_e7bd140c6cbf.slice - libcontainer container kubepods-besteffort-podce1fad39_7ddb_4f27_8aaa_e7bd140c6cbf.slice. Jan 13 20:36:53.281994 systemd[1]: kubepods-besteffort-podce1fad39_7ddb_4f27_8aaa_e7bd140c6cbf.slice: Consumed 2.948s CPU time. Jan 13 20:36:53.283171 systemd[1]: Removed slice kubepods-besteffort-podac41f38c_55b9_4a77_8a90_e5737b17fd15.slice - libcontainer container kubepods-besteffort-podac41f38c_55b9_4a77_8a90_e5737b17fd15.slice. Jan 13 20:36:53.314092 systemd[1]: var-lib-kubelet-pods-ac41f38c\x2d55b9\x2d4a77\x2d8a90\x2de5737b17fd15-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Jan 13 20:36:53.314235 systemd[1]: run-netns-cni\x2dfa0d1222\x2d857b\x2d69c2\x2d7b5c\x2debb118a7ff8c.mount: Deactivated successfully. Jan 13 20:36:53.314384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8-rootfs.mount: Deactivated successfully. 
Jan 13 20:36:53.314511 systemd[1]: var-lib-kubelet-pods-ce1fad39\x2d7ddb\x2d4f27\x2d8aaa\x2de7bd140c6cbf-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 13 20:36:53.314650 systemd[1]: var-lib-kubelet-pods-ac41f38c\x2d55b9\x2d4a77\x2d8a90\x2de5737b17fd15-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2wkpn.mount: Deactivated successfully. Jan 13 20:36:53.314779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda-rootfs.mount: Deactivated successfully. Jan 13 20:36:53.314907 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b539bc52018d224a870a6b22a5fb3a3e07e8b8be6a996d1200d89f8c6ffcfda-shm.mount: Deactivated successfully. Jan 13 20:36:53.315014 systemd[1]: var-lib-kubelet-pods-ce1fad39\x2d7ddb\x2d4f27\x2d8aaa\x2de7bd140c6cbf-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jan 13 20:36:53.315127 systemd[1]: var-lib-kubelet-pods-ce1fad39\x2d7ddb\x2d4f27\x2d8aaa\x2de7bd140c6cbf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvbz5w.mount: Deactivated successfully. Jan 13 20:36:53.407871 kubelet[2667]: E0113 20:36:53.407703 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:53.408650 containerd[1484]: time="2025-01-13T20:36:53.408555930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8l6rs,Uid:eac427fd-59da-43f0-b512-3a55c92cfb8c,Namespace:calico-system,Attempt:0,}" Jan 13 20:36:53.458451 containerd[1484]: time="2025-01-13T20:36:53.458338688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:36:53.458451 containerd[1484]: time="2025-01-13T20:36:53.458390656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:36:53.458451 containerd[1484]: time="2025-01-13T20:36:53.458400955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:53.458731 containerd[1484]: time="2025-01-13T20:36:53.458477511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:53.488421 systemd[1]: Started cri-containerd-aa23008e30f6a55dd5951bde071629f6fd0f179fcf1de4ff06d5ab888d62a5dd.scope - libcontainer container aa23008e30f6a55dd5951bde071629f6fd0f179fcf1de4ff06d5ab888d62a5dd. Jan 13 20:36:53.515998 containerd[1484]: time="2025-01-13T20:36:53.515952948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8l6rs,Uid:eac427fd-59da-43f0-b512-3a55c92cfb8c,Namespace:calico-system,Attempt:0,} returns sandbox id \"aa23008e30f6a55dd5951bde071629f6fd0f179fcf1de4ff06d5ab888d62a5dd\"" Jan 13 20:36:53.516972 kubelet[2667]: E0113 20:36:53.516945 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:53.518934 containerd[1484]: time="2025-01-13T20:36:53.518804901Z" level=info msg="CreateContainer within sandbox \"aa23008e30f6a55dd5951bde071629f6fd0f179fcf1de4ff06d5ab888d62a5dd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:36:53.705834 containerd[1484]: time="2025-01-13T20:36:53.705685968Z" level=info msg="CreateContainer within sandbox \"aa23008e30f6a55dd5951bde071629f6fd0f179fcf1de4ff06d5ab888d62a5dd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id 
\"3473cd6178bcbfafe04b9c7fa917aab4066eb57e88856ccebb5c11828f4ae9c9\"" Jan 13 20:36:53.707332 containerd[1484]: time="2025-01-13T20:36:53.706401770Z" level=info msg="StartContainer for \"3473cd6178bcbfafe04b9c7fa917aab4066eb57e88856ccebb5c11828f4ae9c9\"" Jan 13 20:36:53.735457 systemd[1]: Started cri-containerd-3473cd6178bcbfafe04b9c7fa917aab4066eb57e88856ccebb5c11828f4ae9c9.scope - libcontainer container 3473cd6178bcbfafe04b9c7fa917aab4066eb57e88856ccebb5c11828f4ae9c9. Jan 13 20:36:53.799579 containerd[1484]: time="2025-01-13T20:36:53.799529884Z" level=info msg="StartContainer for \"3473cd6178bcbfafe04b9c7fa917aab4066eb57e88856ccebb5c11828f4ae9c9\" returns successfully" Jan 13 20:36:53.813627 kubelet[2667]: E0113 20:36:53.813563 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:53.815197 kubelet[2667]: I0113 20:36:53.815165 2667 scope.go:117] "RemoveContainer" containerID="6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8" Jan 13 20:36:53.816490 containerd[1484]: time="2025-01-13T20:36:53.816435071Z" level=info msg="RemoveContainer for \"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8\"" Jan 13 20:36:53.844553 containerd[1484]: time="2025-01-13T20:36:53.844388118Z" level=info msg="RemoveContainer for \"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8\" returns successfully" Jan 13 20:36:53.844735 kubelet[2667]: I0113 20:36:53.844700 2667 scope.go:117] "RemoveContainer" containerID="bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f" Jan 13 20:36:53.845929 containerd[1484]: time="2025-01-13T20:36:53.845904982Z" level=info msg="RemoveContainer for \"bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f\"" Jan 13 20:36:53.867813 containerd[1484]: time="2025-01-13T20:36:53.867755624Z" level=info msg="RemoveContainer for 
\"bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f\" returns successfully" Jan 13 20:36:53.868047 kubelet[2667]: I0113 20:36:53.868004 2667 scope.go:117] "RemoveContainer" containerID="34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756" Jan 13 20:36:53.869316 containerd[1484]: time="2025-01-13T20:36:53.869281766Z" level=info msg="RemoveContainer for \"34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756\"" Jan 13 20:36:53.915439 systemd[1]: cri-containerd-3473cd6178bcbfafe04b9c7fa917aab4066eb57e88856ccebb5c11828f4ae9c9.scope: Deactivated successfully. Jan 13 20:36:53.920374 containerd[1484]: time="2025-01-13T20:36:53.918626832Z" level=info msg="RemoveContainer for \"34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756\" returns successfully" Jan 13 20:36:53.920374 containerd[1484]: time="2025-01-13T20:36:53.919376578Z" level=error msg="ContainerStatus for \"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8\": not found" Jan 13 20:36:53.920374 containerd[1484]: time="2025-01-13T20:36:53.919972129Z" level=error msg="ContainerStatus for \"bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f\": not found" Jan 13 20:36:53.920374 containerd[1484]: time="2025-01-13T20:36:53.920353656Z" level=error msg="ContainerStatus for \"34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756\": not found" Jan 13 20:36:53.920763 kubelet[2667]: I0113 20:36:53.918924 2667 scope.go:117] "RemoveContainer" 
containerID="6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8" Jan 13 20:36:53.920763 kubelet[2667]: E0113 20:36:53.919575 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8\": not found" containerID="6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8" Jan 13 20:36:53.920763 kubelet[2667]: I0113 20:36:53.919607 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8"} err="failed to get container status \"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"6806f5450f209be9f653c753ab76e4a6deaa89c3311a262dbccb0606bcafa3b8\": not found" Jan 13 20:36:53.920763 kubelet[2667]: I0113 20:36:53.919706 2667 scope.go:117] "RemoveContainer" containerID="bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f" Jan 13 20:36:53.920763 kubelet[2667]: E0113 20:36:53.920123 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f\": not found" containerID="bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f" Jan 13 20:36:53.920763 kubelet[2667]: I0113 20:36:53.920142 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f"} err="failed to get container status \"bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd09696a65a9d1d476b13550dd47b4eeec944458481d6df254f463ba2023ee1f\": 
not found" Jan 13 20:36:53.920763 kubelet[2667]: I0113 20:36:53.920155 2667 scope.go:117] "RemoveContainer" containerID="34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756" Jan 13 20:36:53.920949 kubelet[2667]: E0113 20:36:53.920489 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756\": not found" containerID="34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756" Jan 13 20:36:53.920949 kubelet[2667]: I0113 20:36:53.920515 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756"} err="failed to get container status \"34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756\": rpc error: code = NotFound desc = an error occurred when try to find container \"34df79a06255aa41d9f18e0ffb7a13f48ad0498345b42596f2d854fdc8e21756\": not found" Jan 13 20:36:54.090621 containerd[1484]: time="2025-01-13T20:36:54.090182458Z" level=info msg="shim disconnected" id=3473cd6178bcbfafe04b9c7fa917aab4066eb57e88856ccebb5c11828f4ae9c9 namespace=k8s.io Jan 13 20:36:54.090621 containerd[1484]: time="2025-01-13T20:36:54.090274272Z" level=warning msg="cleaning up after shim disconnected" id=3473cd6178bcbfafe04b9c7fa917aab4066eb57e88856ccebb5c11828f4ae9c9 namespace=k8s.io Jan 13 20:36:54.090621 containerd[1484]: time="2025-01-13T20:36:54.090288749Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:36:54.817868 kubelet[2667]: E0113 20:36:54.817827 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:36:54.819837 containerd[1484]: time="2025-01-13T20:36:54.819803251Z" level=info msg="CreateContainer within sandbox 
\"aa23008e30f6a55dd5951bde071629f6fd0f179fcf1de4ff06d5ab888d62a5dd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:36:55.003272 containerd[1484]: time="2025-01-13T20:36:55.003174766Z" level=info msg="CreateContainer within sandbox \"aa23008e30f6a55dd5951bde071629f6fd0f179fcf1de4ff06d5ab888d62a5dd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"20aee54a55832b7847ceaa7280b903ef09c69fab0857e7451d64773270ba1028\"" Jan 13 20:36:55.003978 containerd[1484]: time="2025-01-13T20:36:55.003723308Z" level=info msg="StartContainer for \"20aee54a55832b7847ceaa7280b903ef09c69fab0857e7451d64773270ba1028\"" Jan 13 20:36:55.040447 systemd[1]: Started cri-containerd-20aee54a55832b7847ceaa7280b903ef09c69fab0857e7451d64773270ba1028.scope - libcontainer container 20aee54a55832b7847ceaa7280b903ef09c69fab0857e7451d64773270ba1028. Jan 13 20:36:55.094672 containerd[1484]: time="2025-01-13T20:36:55.094541545Z" level=info msg="StartContainer for \"20aee54a55832b7847ceaa7280b903ef09c69fab0857e7451d64773270ba1028\" returns successfully" Jan 13 20:36:55.277055 kubelet[2667]: I0113 20:36:55.276995 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac41f38c-55b9-4a77-8a90-e5737b17fd15" path="/var/lib/kubelet/pods/ac41f38c-55b9-4a77-8a90-e5737b17fd15/volumes" Jan 13 20:36:55.277870 kubelet[2667]: I0113 20:36:55.277845 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf" path="/var/lib/kubelet/pods/ce1fad39-7ddb-4f27-8aaa-e7bd140c6cbf/volumes"