Jan 29 16:24:52.902535 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:24:52.902557 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:24:52.902569 kernel: BIOS-provided physical RAM map:
Jan 29 16:24:52.902575 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 16:24:52.902582 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 16:24:52.902588 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 16:24:52.902596 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 29 16:24:52.902603 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 29 16:24:52.902609 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 16:24:52.902618 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 16:24:52.902624 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 16:24:52.902631 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 16:24:52.902637 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 16:24:52.902644 kernel: NX (Execute Disable) protection: active
Jan 29 16:24:52.902652 kernel: APIC: Static calls initialized
Jan 29 16:24:52.902662 kernel: SMBIOS 2.8 present.
Jan 29 16:24:52.902669 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 29 16:24:52.902676 kernel: Hypervisor detected: KVM
Jan 29 16:24:52.902683 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 16:24:52.902690 kernel: kvm-clock: using sched offset of 2471260395 cycles
Jan 29 16:24:52.902697 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 16:24:52.902714 kernel: tsc: Detected 2794.748 MHz processor
Jan 29 16:24:52.902723 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:24:52.902730 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:24:52.902738 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 29 16:24:52.902748 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 16:24:52.902755 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:24:52.902771 kernel: Using GB pages for direct mapping
Jan 29 16:24:52.902779 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:24:52.902787 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 29 16:24:52.902794 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:24:52.902801 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:24:52.902809 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:24:52.902816 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 29 16:24:52.902826 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:24:52.902833 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:24:52.902840 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:24:52.902848 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:24:52.902855 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 29 16:24:52.902862 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 29 16:24:52.902873 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 29 16:24:52.902883 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 29 16:24:52.902890 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 29 16:24:52.902898 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 29 16:24:52.902905 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 29 16:24:52.902912 kernel: No NUMA configuration found
Jan 29 16:24:52.902919 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 29 16:24:52.902927 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 29 16:24:52.902937 kernel: Zone ranges:
Jan 29 16:24:52.902944 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:24:52.902952 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 29 16:24:52.902959 kernel: Normal empty
Jan 29 16:24:52.902966 kernel: Movable zone start for each node
Jan 29 16:24:52.902974 kernel: Early memory node ranges
Jan 29 16:24:52.902981 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 16:24:52.902988 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 29 16:24:52.902996 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 29 16:24:52.903005 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:24:52.903013 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 16:24:52.903020 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 29 16:24:52.903027 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 16:24:52.903035 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 16:24:52.903042 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:24:52.903049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 16:24:52.903057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 16:24:52.903064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:24:52.903074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 16:24:52.903081 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 16:24:52.903089 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:24:52.903096 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 16:24:52.903103 kernel: TSC deadline timer available
Jan 29 16:24:52.903111 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 16:24:52.903119 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 16:24:52.903126 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 16:24:52.903133 kernel: kvm-guest: setup PV sched yield
Jan 29 16:24:52.903140 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 16:24:52.903150 kernel: Booting paravirtualized kernel on KVM
Jan 29 16:24:52.903158 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:24:52.903165 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 16:24:52.903173 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 16:24:52.903180 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 16:24:52.903187 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 16:24:52.903195 kernel: kvm-guest: PV spinlocks enabled
Jan 29 16:24:52.903202 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 16:24:52.903211 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:24:52.903221 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:24:52.903228 kernel: random: crng init done
Jan 29 16:24:52.903236 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:24:52.903243 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:24:52.903251 kernel: Fallback order for Node 0: 0
Jan 29 16:24:52.903258 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 29 16:24:52.903265 kernel: Policy zone: DMA32
Jan 29 16:24:52.903273 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:24:52.903283 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 138948K reserved, 0K cma-reserved)
Jan 29 16:24:52.903290 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 16:24:52.903298 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:24:52.903305 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:24:52.903313 kernel: Dynamic Preempt: voluntary
Jan 29 16:24:52.903320 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:24:52.903328 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:24:52.903335 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 16:24:52.903343 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:24:52.903353 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:24:52.903361 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:24:52.903368 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:24:52.903376 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 16:24:52.903383 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 16:24:52.903391 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:24:52.903410 kernel: Console: colour VGA+ 80x25
Jan 29 16:24:52.903417 kernel: printk: console [ttyS0] enabled
Jan 29 16:24:52.903425 kernel: ACPI: Core revision 20230628
Jan 29 16:24:52.903435 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 16:24:52.903443 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:24:52.903450 kernel: x2apic enabled
Jan 29 16:24:52.903457 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:24:52.903465 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 16:24:52.903473 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 16:24:52.903481 kernel: kvm-guest: setup PV IPIs
Jan 29 16:24:52.903500 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 16:24:52.903510 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 16:24:52.903518 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 29 16:24:52.903525 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 16:24:52.903533 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 16:24:52.903543 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 16:24:52.903551 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:24:52.903558 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 16:24:52.903566 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:24:52.903574 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:24:52.903584 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 16:24:52.903592 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 16:24:52.903599 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 16:24:52.903607 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 16:24:52.903615 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 16:24:52.903623 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 16:24:52.903631 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 16:24:52.903639 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:24:52.903649 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:24:52.903657 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:24:52.903665 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 16:24:52.903672 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 16:24:52.903680 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:24:52.903688 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:24:52.903695 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:24:52.903703 kernel: landlock: Up and running.
Jan 29 16:24:52.903718 kernel: SELinux: Initializing.
Jan 29 16:24:52.903729 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:24:52.903736 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:24:52.903744 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 16:24:52.903752 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:24:52.903760 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:24:52.903768 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:24:52.903776 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 16:24:52.903783 kernel: ... version: 0
Jan 29 16:24:52.903791 kernel: ... bit width: 48
Jan 29 16:24:52.903801 kernel: ... generic registers: 6
Jan 29 16:24:52.903808 kernel: ... value mask: 0000ffffffffffff
Jan 29 16:24:52.903816 kernel: ... max period: 00007fffffffffff
Jan 29 16:24:52.903824 kernel: ... fixed-purpose events: 0
Jan 29 16:24:52.903831 kernel: ... event mask: 000000000000003f
Jan 29 16:24:52.903839 kernel: signal: max sigframe size: 1776
Jan 29 16:24:52.903846 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:24:52.903854 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:24:52.903862 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:24:52.903872 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:24:52.903880 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 16:24:52.903887 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 16:24:52.903895 kernel: smpboot: Max logical packages: 1
Jan 29 16:24:52.903902 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 29 16:24:52.903910 kernel: devtmpfs: initialized
Jan 29 16:24:52.903918 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:24:52.903925 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:24:52.903933 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 16:24:52.903943 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:24:52.903951 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:24:52.903959 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:24:52.903966 kernel: audit: type=2000 audit(1738167891.582:1): state=initialized audit_enabled=0 res=1
Jan 29 16:24:52.903974 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:24:52.903982 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:24:52.903989 kernel: cpuidle: using governor menu
Jan 29 16:24:52.903997 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:24:52.904005 kernel: dca service started, version 1.12.1
Jan 29 16:24:52.904015 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 16:24:52.904023 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 16:24:52.904030 kernel: PCI: Using configuration type 1 for base access
Jan 29 16:24:52.904038 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 16:24:52.904046 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:24:52.904053 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:24:52.904061 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:24:52.904069 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:24:52.904076 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:24:52.904086 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:24:52.904095 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:24:52.904122 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:24:52.904133 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:24:52.904143 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:24:52.904161 kernel: ACPI: Interpreter enabled
Jan 29 16:24:52.904171 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 16:24:52.904181 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:24:52.904191 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:24:52.904206 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 16:24:52.904216 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 16:24:52.904226 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:24:52.904462 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:24:52.904632 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 16:24:52.904807 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 16:24:52.904823 kernel: PCI host bridge to bus 0000:00
Jan 29 16:24:52.904993 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 16:24:52.905141 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 16:24:52.905288 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 16:24:52.905457 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 29 16:24:52.905670 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 16:24:52.905816 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 29 16:24:52.905930 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:24:52.906080 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 16:24:52.906227 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 16:24:52.906353 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 29 16:24:52.906597 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 29 16:24:52.906730 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 29 16:24:52.906913 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 16:24:52.907077 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 16:24:52.907264 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 16:24:52.907451 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 29 16:24:52.907607 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 29 16:24:52.907769 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 16:24:52.907906 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 16:24:52.908033 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 29 16:24:52.908215 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 29 16:24:52.908382 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 16:24:52.908578 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 29 16:24:52.908752 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 29 16:24:52.908929 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 29 16:24:52.909088 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 29 16:24:52.909246 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 16:24:52.909379 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 16:24:52.909634 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 16:24:52.909811 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 29 16:24:52.909949 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 29 16:24:52.910098 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 16:24:52.910261 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 16:24:52.910275 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 16:24:52.910291 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 16:24:52.910302 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 16:24:52.910311 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 16:24:52.910319 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 16:24:52.910326 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 16:24:52.910334 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 16:24:52.910342 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 16:24:52.910350 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 16:24:52.910357 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 16:24:52.910368 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 16:24:52.910376 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 16:24:52.910383 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 16:24:52.910426 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 16:24:52.910435 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 16:24:52.910443 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 16:24:52.910450 kernel: iommu: Default domain type: Translated
Jan 29 16:24:52.910458 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:24:52.910466 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:24:52.910477 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 16:24:52.910485 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 16:24:52.910492 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 29 16:24:52.910621 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 16:24:52.910761 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 16:24:52.910902 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 16:24:52.910915 kernel: vgaarb: loaded
Jan 29 16:24:52.910924 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 16:24:52.910936 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 16:24:52.910944 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 16:24:52.910951 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:24:52.910959 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:24:52.910967 kernel: pnp: PnP ACPI init
Jan 29 16:24:52.911112 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 16:24:52.911129 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 16:24:52.911140 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:24:52.911152 kernel: NET: Registered PF_INET protocol family
Jan 29 16:24:52.911161 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:24:52.911168 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 16:24:52.911177 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:24:52.911184 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:24:52.911192 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 16:24:52.911201 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 16:24:52.911208 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:24:52.911216 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:24:52.911227 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:24:52.911235 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:24:52.911382 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 16:24:52.911542 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 16:24:52.911683 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 16:24:52.911815 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 29 16:24:52.911951 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 16:24:52.912095 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 29 16:24:52.912114 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:24:52.912122 kernel: Initialise system trusted keyrings
Jan 29 16:24:52.912130 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 16:24:52.912138 kernel: Key type asymmetric registered
Jan 29 16:24:52.912146 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:24:52.912153 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 16:24:52.912161 kernel: io scheduler mq-deadline registered
Jan 29 16:24:52.912169 kernel: io scheduler kyber registered
Jan 29 16:24:52.912177 kernel: io scheduler bfq registered
Jan 29 16:24:52.912185 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:24:52.912196 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 16:24:52.912204 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 16:24:52.912212 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 16:24:52.912219 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:24:52.912227 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 16:24:52.912235 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 16:24:52.912243 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 16:24:52.912251 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 16:24:52.912411 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 16:24:52.912428 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 16:24:52.912561 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 16:24:52.912696 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T16:24:52 UTC (1738167892)
Jan 29 16:24:52.912836 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 16:24:52.912851 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 16:24:52.912860 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:24:52.912868 kernel: Segment Routing with IPv6
Jan 29 16:24:52.912880 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:24:52.912888 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:24:52.912896 kernel: Key type dns_resolver registered
Jan 29 16:24:52.912904 kernel: IPI shorthand broadcast: enabled
Jan 29 16:24:52.912912 kernel: sched_clock: Marking stable (586004803, 118065267)->(767153467, -63083397)
Jan 29 16:24:52.912920 kernel: registered taskstats version 1
Jan 29 16:24:52.912928 kernel: Loading compiled-in X.509 certificates
Jan 29 16:24:52.912936 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340'
Jan 29 16:24:52.912944 kernel: Key type .fscrypt registered
Jan 29 16:24:52.912951 kernel: Key type fscrypt-provisioning registered
Jan 29 16:24:52.912962 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:24:52.912972 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:24:52.912995 kernel: ima: No architecture policies found
Jan 29 16:24:52.913007 kernel: clk: Disabling unused clocks
Jan 29 16:24:52.913018 kernel: Freeing unused kernel image (initmem) memory: 43472K
Jan 29 16:24:52.913030 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 16:24:52.913042 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Jan 29 16:24:52.913055 kernel: Run /init as init process
Jan 29 16:24:52.913070 kernel: with arguments:
Jan 29 16:24:52.913080 kernel: /init
Jan 29 16:24:52.913092 kernel: with environment:
Jan 29 16:24:52.913104 kernel: HOME=/
Jan 29 16:24:52.913115 kernel: TERM=linux
Jan 29 16:24:52.913126 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:24:52.913139 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:24:52.913155 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:24:52.913172 systemd[1]: Detected virtualization kvm.
Jan 29 16:24:52.913184 systemd[1]: Detected architecture x86-64.
Jan 29 16:24:52.913196 systemd[1]: Running in initrd.
Jan 29 16:24:52.913208 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:24:52.913221 systemd[1]: Hostname set to <localhost>.
Jan 29 16:24:52.913233 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:24:52.913245 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:24:52.913257 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:24:52.913273 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:24:52.913300 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:24:52.913316 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:24:52.913328 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:24:52.913341 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:24:52.913357 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:24:52.913369 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:24:52.913380 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:24:52.913431 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:24:52.913443 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:24:52.913455 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:24:52.913466 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:24:52.913474 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:24:52.913486 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:24:52.913496 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:24:52.913504 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:24:52.913513 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:24:52.913521 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:24:52.913530 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:24:52.913539 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:24:52.913547 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:24:52.913556 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 16:24:52.913567 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:24:52.913575 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 16:24:52.913584 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 16:24:52.913593 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:24:52.913601 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:24:52.913610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:24:52.913618 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 16:24:52.913627 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:24:52.913639 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 16:24:52.913674 systemd-journald[194]: Collecting audit messages is disabled. Jan 29 16:24:52.913699 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:24:52.913718 systemd-journald[194]: Journal started Jan 29 16:24:52.913740 systemd-journald[194]: Runtime Journal (/run/log/journal/e6de786073224b53b3f4d9d148ad0358) is 6M, max 48.4M, 42.3M free. Jan 29 16:24:52.907380 systemd-modules-load[195]: Inserted module 'overlay' Jan 29 16:24:52.942223 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:24:52.942255 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 16:24:52.942273 kernel: Bridge firewalling registered Jan 29 16:24:52.935359 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 29 16:24:52.942594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 29 16:24:52.946108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:24:52.956283 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:24:52.957337 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:24:52.962118 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:24:52.968972 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:24:52.972650 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:24:52.976195 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:24:52.979077 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:24:52.981667 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:24:53.000631 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 16:24:53.004139 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:24:53.006700 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:24:53.014423 dracut-cmdline[228]: dracut-dracut-053 Jan 29 16:24:53.017777 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:24:53.043089 systemd-resolved[230]: Positive Trust Anchors: Jan 29 16:24:53.043107 systemd-resolved[230]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:24:53.043146 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:24:53.045673 systemd-resolved[230]: Defaulting to hostname 'linux'. Jan 29 16:24:53.046843 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:24:53.053203 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:24:53.108459 kernel: SCSI subsystem initialized Jan 29 16:24:53.118434 kernel: Loading iSCSI transport class v2.0-870. Jan 29 16:24:53.130434 kernel: iscsi: registered transport (tcp) Jan 29 16:24:53.153763 kernel: iscsi: registered transport (qla4xxx) Jan 29 16:24:53.153817 kernel: QLogic iSCSI HBA Driver Jan 29 16:24:53.201824 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 16:24:53.214638 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 16:24:53.243893 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 16:24:53.243969 kernel: device-mapper: uevent: version 1.0.3 Jan 29 16:24:53.244923 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 16:24:53.289450 kernel: raid6: avx2x4 gen() 19166 MB/s Jan 29 16:24:53.306448 kernel: raid6: avx2x2 gen() 23796 MB/s Jan 29 16:24:53.323572 kernel: raid6: avx2x1 gen() 25142 MB/s Jan 29 16:24:53.323641 kernel: raid6: using algorithm avx2x1 gen() 25142 MB/s Jan 29 16:24:53.341545 kernel: raid6: .... xor() 15562 MB/s, rmw enabled Jan 29 16:24:53.341614 kernel: raid6: using avx2x2 recovery algorithm Jan 29 16:24:53.365432 kernel: xor: automatically using best checksumming function avx Jan 29 16:24:53.528461 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 16:24:53.545565 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:24:53.560856 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:24:53.578152 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 29 16:24:53.584003 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:24:53.599652 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 16:24:53.616312 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Jan 29 16:24:53.654716 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:24:53.664633 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:24:53.727301 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:24:53.739608 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 16:24:53.751059 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 16:24:53.753456 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 29 16:24:53.755140 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:24:53.757658 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:24:53.771644 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 29 16:24:53.807734 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 16:24:53.807755 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 16:24:53.807923 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 16:24:53.807944 kernel: GPT:9289727 != 19775487 Jan 29 16:24:53.807955 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 16:24:53.807965 kernel: GPT:9289727 != 19775487 Jan 29 16:24:53.807975 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 16:24:53.807986 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:24:53.807997 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 16:24:53.808007 kernel: libata version 3.00 loaded. Jan 29 16:24:53.808018 kernel: AES CTR mode by8 optimization enabled Jan 29 16:24:53.770592 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 16:24:53.781414 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:24:53.806382 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:24:53.806519 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:24:53.811116 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 29 16:24:53.827452 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 16:24:53.851215 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 16:24:53.851239 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 16:24:53.851469 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 16:24:53.851655 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (457) Jan 29 16:24:53.851672 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (473) Jan 29 16:24:53.851697 kernel: scsi host0: ahci Jan 29 16:24:53.851906 kernel: scsi host1: ahci Jan 29 16:24:53.852101 kernel: scsi host2: ahci Jan 29 16:24:53.852276 kernel: scsi host3: ahci Jan 29 16:24:53.852501 kernel: scsi host4: ahci Jan 29 16:24:53.852698 kernel: scsi host5: ahci Jan 29 16:24:53.852909 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 29 16:24:53.852926 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 29 16:24:53.852941 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 29 16:24:53.852955 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 29 16:24:53.852970 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 29 16:24:53.852988 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 29 16:24:53.813975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:24:53.814141 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:24:53.815609 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:24:53.829663 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:24:53.859773 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 29 16:24:53.887964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:24:53.913291 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 16:24:53.922172 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 16:24:53.923567 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 16:24:53.936897 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 16:24:53.949718 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 16:24:53.951948 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:24:53.962871 disk-uuid[556]: Primary Header is updated. Jan 29 16:24:53.962871 disk-uuid[556]: Secondary Entries is updated. Jan 29 16:24:53.962871 disk-uuid[556]: Secondary Header is updated. Jan 29 16:24:53.967427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:24:53.972448 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:24:53.980237 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 16:24:54.161706 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 16:24:54.161783 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 16:24:54.161795 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 16:24:54.161805 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 16:24:54.163444 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 16:24:54.163474 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 16:24:54.164066 kernel: ata3.00: applying bridge limits Jan 29 16:24:54.165416 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 16:24:54.165438 kernel: ata3.00: configured for UDMA/100 Jan 29 16:24:54.166433 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 16:24:54.212444 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 16:24:54.226179 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 16:24:54.226200 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 16:24:54.973432 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:24:54.974185 disk-uuid[559]: The operation has completed successfully. Jan 29 16:24:55.012976 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 16:24:55.013122 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 16:24:55.057667 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 16:24:55.061951 sh[594]: Success Jan 29 16:24:55.077423 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 16:24:55.113915 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 16:24:55.125506 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 16:24:55.127870 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 16:24:55.141568 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3 Jan 29 16:24:55.141630 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:24:55.141669 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 16:24:55.143944 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 16:24:55.143972 kernel: BTRFS info (device dm-0): using free space tree Jan 29 16:24:55.150358 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 16:24:55.153289 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 16:24:55.169591 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 16:24:55.172482 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 16:24:55.184361 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:24:55.184454 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:24:55.184470 kernel: BTRFS info (device vda6): using free space tree Jan 29 16:24:55.187429 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 16:24:55.196601 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 16:24:55.199241 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:24:55.208210 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 16:24:55.217569 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 29 16:24:55.276615 ignition[695]: Ignition 2.20.0 Jan 29 16:24:55.276630 ignition[695]: Stage: fetch-offline Jan 29 16:24:55.276685 ignition[695]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:24:55.276696 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:24:55.276809 ignition[695]: parsed url from cmdline: "" Jan 29 16:24:55.276815 ignition[695]: no config URL provided Jan 29 16:24:55.276821 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:24:55.276832 ignition[695]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:24:55.276862 ignition[695]: op(1): [started] loading QEMU firmware config module Jan 29 16:24:55.276869 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 16:24:55.284036 ignition[695]: op(1): [finished] loading QEMU firmware config module Jan 29 16:24:55.288511 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:24:55.301596 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:24:55.328998 systemd-networkd[784]: lo: Link UP Jan 29 16:24:55.329009 systemd-networkd[784]: lo: Gained carrier Jan 29 16:24:55.332084 systemd-networkd[784]: Enumeration completed Jan 29 16:24:55.332528 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:24:55.332520 ignition[695]: parsing config with SHA512: ead02f70ab4bcbfa6ec45f913e626454180c935f558152272531a1d499622ca859bc02d5729ed0690233ae4f11edd8eca5a6c1dc90720db2d414433c8bed8e63 Jan 29 16:24:55.332990 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:24:55.333000 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 16:24:55.334116 systemd-networkd[784]: eth0: Link UP Jan 29 16:24:55.334124 systemd-networkd[784]: eth0: Gained carrier Jan 29 16:24:55.334133 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:24:55.338835 unknown[695]: fetched base config from "system" Jan 29 16:24:55.338846 unknown[695]: fetched user config from "qemu" Jan 29 16:24:55.339257 ignition[695]: fetch-offline: fetch-offline passed Jan 29 16:24:55.339336 ignition[695]: Ignition finished successfully Jan 29 16:24:55.345085 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:24:55.347904 systemd[1]: Reached target network.target - Network. Jan 29 16:24:55.350277 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 16:24:55.357462 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:24:55.359123 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 16:24:55.375213 ignition[788]: Ignition 2.20.0 Jan 29 16:24:55.375224 ignition[788]: Stage: kargs Jan 29 16:24:55.375380 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:24:55.375391 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:24:55.376205 ignition[788]: kargs: kargs passed Jan 29 16:24:55.376250 ignition[788]: Ignition finished successfully Jan 29 16:24:55.382264 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 16:24:55.395689 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 29 16:24:55.410097 ignition[798]: Ignition 2.20.0 Jan 29 16:24:55.410108 ignition[798]: Stage: disks Jan 29 16:24:55.410275 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:24:55.410286 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:24:55.411077 ignition[798]: disks: disks passed Jan 29 16:24:55.411122 ignition[798]: Ignition finished successfully Jan 29 16:24:55.417012 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 16:24:55.419369 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 16:24:55.421593 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 16:24:55.423955 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:24:55.424067 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:24:55.425996 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:24:55.438736 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 16:24:55.451702 systemd-resolved[230]: Detected conflict on linux IN A 10.0.0.142 Jan 29 16:24:55.451719 systemd-resolved[230]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Jan 29 16:24:55.453598 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 16:24:55.460697 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 16:24:56.149511 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 16:24:56.236417 kernel: EXT4-fs (vda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none. Jan 29 16:24:56.236711 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 16:24:56.237360 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 16:24:56.258545 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 29 16:24:56.260468 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 16:24:56.262001 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 16:24:56.272287 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (818) Jan 29 16:24:56.272334 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:24:56.272349 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:24:56.272362 kernel: BTRFS info (device vda6): using free space tree Jan 29 16:24:56.272375 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 16:24:56.262057 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 16:24:56.262088 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:24:56.268437 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 16:24:56.273873 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 16:24:56.277653 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 16:24:56.317253 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 16:24:56.327776 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Jan 29 16:24:56.335195 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 16:24:56.340552 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 16:24:56.481587 systemd-networkd[784]: eth0: Gained IPv6LL Jan 29 16:24:56.490919 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 16:24:56.514561 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jan 29 16:24:56.516631 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 16:24:56.523467 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:24:56.541007 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 16:24:56.543439 ignition[930]: INFO : Ignition 2.20.0 Jan 29 16:24:56.543439 ignition[930]: INFO : Stage: mount Jan 29 16:24:56.545157 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:24:56.545157 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:24:56.545157 ignition[930]: INFO : mount: mount passed Jan 29 16:24:56.545157 ignition[930]: INFO : Ignition finished successfully Jan 29 16:24:56.546753 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 16:24:56.558633 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 16:24:57.140929 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 16:24:57.157611 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:24:57.165425 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (945) Jan 29 16:24:57.168030 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:24:57.168056 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:24:57.168068 kernel: BTRFS info (device vda6): using free space tree Jan 29 16:24:57.171427 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 16:24:57.173030 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 16:24:57.192314 ignition[962]: INFO : Ignition 2.20.0 Jan 29 16:24:57.192314 ignition[962]: INFO : Stage: files Jan 29 16:24:57.194106 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:24:57.194106 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:24:57.196245 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:24:57.197661 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:24:57.197661 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:24:57.200852 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:24:57.202538 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:24:57.202538 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:24:57.201388 unknown[962]: wrote ssh authorized keys file for user: core Jan 29 16:24:57.206729 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 16:24:57.206729 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 16:24:57.254072 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 16:24:57.408976 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 16:24:57.408976 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:24:57.413788 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 
16:24:57.413788 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:24:57.413788 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:24:57.413788 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:24:57.413788 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:24:57.413788 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:24:57.413788 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:24:57.413788 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:24:57.413788 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:24:57.413788 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:24:57.413788 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:24:57.413788 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:24:57.413788 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 16:24:57.878484 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 16:24:58.224074 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:24:58.224074 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 16:24:58.229232 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:24:58.229232 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:24:58.229232 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 16:24:58.229232 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 29 16:24:58.229232 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 16:24:58.229232 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 16:24:58.229232 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 29 16:24:58.229232 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 16:24:58.290347 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 16:24:58.297825 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 16:24:58.300120 ignition[962]: INFO : files: op(f): [finished] 
setting preset to disabled for "coreos-metadata.service" Jan 29 16:24:58.300120 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 29 16:24:58.300120 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 16:24:58.300120 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:24:58.300120 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:24:58.300120 ignition[962]: INFO : files: files passed Jan 29 16:24:58.300120 ignition[962]: INFO : Ignition finished successfully Jan 29 16:24:58.302133 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 16:24:58.324704 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 16:24:58.331208 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:24:58.334689 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:24:58.334822 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 16:24:58.371738 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 16:24:58.377012 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:24:58.377012 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:24:58.382855 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:24:58.379972 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:24:58.383172 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 29 16:24:58.398686 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:24:58.438647 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:24:58.439955 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:24:58.446392 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:24:58.448958 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:24:58.451511 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:24:58.464691 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:24:58.486862 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:24:58.501757 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:24:58.514274 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:24:58.516255 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:24:58.523689 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:24:58.526592 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:24:58.526801 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:24:58.533519 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:24:58.536544 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:24:58.547424 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:24:58.553418 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:24:58.555475 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:24:58.559386 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 29 16:24:58.561846 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:24:58.565081 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:24:58.567582 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:24:58.573377 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:24:58.576253 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:24:58.576488 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:24:58.581328 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:24:58.581524 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:24:58.586153 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:24:58.586306 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:24:58.591278 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:24:58.591524 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:24:58.597837 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:24:58.598074 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:24:58.599291 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:24:58.603434 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:24:58.608869 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:24:58.611378 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:24:58.615098 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:24:58.618730 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:24:58.618902 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 29 16:24:58.622647 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:24:58.622789 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:24:58.625255 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:24:58.627863 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:24:58.631063 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:24:58.632294 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:24:58.645876 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:24:58.650762 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:24:58.653272 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:24:58.654724 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:24:58.658201 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:24:58.658458 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:24:58.666499 ignition[1017]: INFO : Ignition 2.20.0 Jan 29 16:24:58.667868 ignition[1017]: INFO : Stage: umount Jan 29 16:24:58.667868 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:24:58.667868 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:24:58.669939 ignition[1017]: INFO : umount: umount passed Jan 29 16:24:58.669939 ignition[1017]: INFO : Ignition finished successfully Jan 29 16:24:58.669040 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:24:58.669202 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:24:58.680875 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:24:58.681024 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jan 29 16:24:58.682487 systemd[1]: Stopped target network.target - Network. Jan 29 16:24:58.686177 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:24:58.686277 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:24:58.690132 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:24:58.690200 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:24:58.692913 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:24:58.692970 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:24:58.693082 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:24:58.693132 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:24:58.693887 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:24:58.694458 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:24:58.696035 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:24:58.708291 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:24:58.708479 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:24:58.714423 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:24:58.714780 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:24:58.714955 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:24:58.720891 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:24:58.722265 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:24:58.722340 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:24:58.734609 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:24:58.736133 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 29 16:24:58.736274 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:24:58.738312 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:24:58.738410 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:24:58.744177 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:24:58.744279 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:24:58.748238 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:24:58.748318 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:24:58.752469 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:24:58.755133 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:24:58.755226 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:24:58.771802 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:24:58.771961 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:24:58.774831 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:24:58.774998 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:24:58.778037 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:24:58.778110 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:24:58.778581 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:24:58.778624 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:24:58.778995 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:24:58.779042 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 29 16:24:58.785889 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:24:58.785964 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:24:58.786678 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:24:58.786734 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:24:58.843671 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:24:58.844938 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:24:58.845036 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:24:58.848532 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 16:24:58.848615 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:24:58.849969 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:24:58.850020 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:24:58.850339 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:24:58.850387 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:24:58.857956 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 29 16:24:58.858025 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:24:58.886287 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:24:58.886478 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:24:58.933770 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:24:58.933921 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 29 16:24:58.935196 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:24:58.936796 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:24:58.936861 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:24:58.948674 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:24:58.960088 systemd[1]: Switching root. Jan 29 16:24:58.993459 systemd-journald[194]: Journal stopped Jan 29 16:25:00.369299 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 29 16:25:00.369371 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:25:00.369414 kernel: SELinux: policy capability open_perms=1 Jan 29 16:25:00.369430 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:25:00.369444 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:25:00.369463 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:25:00.369479 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:25:00.369494 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:25:00.369518 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:25:00.369533 kernel: audit: type=1403 audit(1738167899.504:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:25:00.369556 systemd[1]: Successfully loaded SELinux policy in 47.845ms. Jan 29 16:25:00.369597 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.618ms. Jan 29 16:25:00.369615 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:25:00.369632 systemd[1]: Detected virtualization kvm. Jan 29 16:25:00.369651 systemd[1]: Detected architecture x86-64. 
Jan 29 16:25:00.369667 systemd[1]: Detected first boot. Jan 29 16:25:00.369683 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:25:00.369699 zram_generator::config[1064]: No configuration found. Jan 29 16:25:00.369723 kernel: Guest personality initialized and is inactive Jan 29 16:25:00.369740 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 16:25:00.369755 kernel: Initialized host personality Jan 29 16:25:00.369770 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:25:00.369789 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:25:00.369806 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:25:00.369822 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:25:00.369838 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:25:00.369854 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:25:00.369870 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:25:00.369886 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:25:00.369902 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:25:00.369922 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:25:00.369938 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:25:00.369954 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:25:00.369970 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:25:00.369985 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:25:00.370001 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 29 16:25:00.370017 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:25:00.370033 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:25:00.370048 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:25:00.370066 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:25:00.370085 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:25:00.370102 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 16:25:00.370118 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:25:00.370133 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:25:00.370149 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 16:25:00.370165 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:25:00.370181 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:25:00.370201 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:25:00.370217 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:25:00.370234 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:25:00.370250 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:25:00.370266 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:25:00.370282 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:25:00.370298 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:25:00.370314 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 29 16:25:00.370330 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:25:00.370349 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:25:00.370365 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:25:00.370381 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:25:00.370589 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:25:00.370609 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:25:00.370624 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:00.370638 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:25:00.370653 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:25:00.370678 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:25:00.370699 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:25:00.370715 systemd[1]: Reached target machines.target - Containers. Jan 29 16:25:00.370733 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:25:00.370748 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:25:00.370765 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:25:00.370781 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:25:00.370797 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:25:00.370814 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 29 16:25:00.370834 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:25:00.370851 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:25:00.370867 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:25:00.370883 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:25:00.370900 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 16:25:00.370915 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:25:00.370930 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:25:00.370946 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 16:25:00.370963 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:25:00.370982 kernel: loop: module loaded Jan 29 16:25:00.370997 kernel: fuse: init (API version 7.39) Jan 29 16:25:00.371013 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:25:00.371030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:25:00.371046 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:25:00.371063 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:25:00.371081 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:25:00.371098 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:25:00.371117 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:25:00.371133 systemd[1]: Stopped verity-setup.service. 
Jan 29 16:25:00.371150 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:00.371166 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:25:00.371180 kernel: ACPI: bus type drm_connector registered Jan 29 16:25:00.371200 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:25:00.371241 systemd-journald[1142]: Collecting audit messages is disabled. Jan 29 16:25:00.371271 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:25:00.371289 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:25:00.371305 systemd-journald[1142]: Journal started Jan 29 16:25:00.371335 systemd-journald[1142]: Runtime Journal (/run/log/journal/e6de786073224b53b3f4d9d148ad0358) is 6M, max 48.4M, 42.3M free. Jan 29 16:25:00.139287 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:25:00.152563 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 16:25:00.153078 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:25:00.373778 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:25:00.374615 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:25:00.375915 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:25:00.377383 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:25:00.379019 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:25:00.380638 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:25:00.380863 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:25:00.382595 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 29 16:25:00.382807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:25:00.384257 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:25:00.384486 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:25:00.385917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:25:00.386130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:25:00.387863 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:25:00.388080 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:25:00.389729 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:25:00.389987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:25:00.391531 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:25:00.393212 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:25:00.394859 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:25:00.396683 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:25:00.412934 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:25:00.423485 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:25:00.425861 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:25:00.427125 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:25:00.427159 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:25:00.429218 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Jan 29 16:25:00.431656 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:25:00.434002 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:25:00.435252 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:25:00.438218 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:25:00.442635 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:25:00.444708 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:25:00.454228 systemd-journald[1142]: Time spent on flushing to /var/log/journal/e6de786073224b53b3f4d9d148ad0358 is 24.948ms for 964 entries. Jan 29 16:25:00.454228 systemd-journald[1142]: System Journal (/var/log/journal/e6de786073224b53b3f4d9d148ad0358) is 8M, max 195.6M, 187.6M free. Jan 29 16:25:00.496615 systemd-journald[1142]: Received client request to flush runtime journal. Jan 29 16:25:00.450315 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 16:25:00.451789 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:25:00.453181 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:25:00.457156 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:25:00.462789 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:25:00.466306 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:25:00.467758 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jan 29 16:25:00.469334 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 16:25:00.487371 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:25:00.489264 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:25:00.492168 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:25:00.497518 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:25:00.499910 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:25:00.501632 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:25:00.507388 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:25:00.510431 kernel: loop0: detected capacity change from 0 to 147912 Jan 29 16:25:00.520930 udevadm[1195]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 16:25:00.524830 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Jan 29 16:25:00.524855 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Jan 29 16:25:00.530415 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:25:00.532295 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:25:00.544439 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:25:00.547019 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:25:00.568447 kernel: loop1: detected capacity change from 0 to 138176 Jan 29 16:25:00.575962 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:25:00.587671 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 29 16:25:00.603272 kernel: loop2: detected capacity change from 0 to 205544 Jan 29 16:25:00.604245 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Jan 29 16:25:00.604266 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Jan 29 16:25:00.612944 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:25:00.639429 kernel: loop3: detected capacity change from 0 to 147912 Jan 29 16:25:00.653448 kernel: loop4: detected capacity change from 0 to 138176 Jan 29 16:25:00.667422 kernel: loop5: detected capacity change from 0 to 205544 Jan 29 16:25:00.674273 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 16:25:00.674986 (sd-merge)[1212]: Merged extensions into '/usr'. Jan 29 16:25:00.679673 systemd[1]: Reload requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:25:00.679698 systemd[1]: Reloading... Jan 29 16:25:00.749476 zram_generator::config[1240]: No configuration found. Jan 29 16:25:00.810601 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:25:00.888229 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:25:00.952154 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:25:00.952805 systemd[1]: Reloading finished in 272 ms. Jan 29 16:25:00.974236 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:25:00.976068 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:25:00.994014 systemd[1]: Starting ensure-sysext.service... Jan 29 16:25:00.996438 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 29 16:25:01.015465 systemd[1]: Reload requested from client PID 1277 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:25:01.015482 systemd[1]: Reloading... Jan 29 16:25:01.018708 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:25:01.019072 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:25:01.020064 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:25:01.020386 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Jan 29 16:25:01.020574 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Jan 29 16:25:01.024477 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:25:01.024498 systemd-tmpfiles[1278]: Skipping /boot Jan 29 16:25:01.038810 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:25:01.038825 systemd-tmpfiles[1278]: Skipping /boot Jan 29 16:25:01.076457 zram_generator::config[1308]: No configuration found. Jan 29 16:25:01.189067 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:25:01.254416 systemd[1]: Reloading finished in 238 ms. Jan 29 16:25:01.270103 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:25:01.294602 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:25:01.312657 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:25:01.315302 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:25:01.317881 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 29 16:25:01.322263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:25:01.325439 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:25:01.334795 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 16:25:01.339018 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:01.339197 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:25:01.340391 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:25:01.344245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:25:01.350354 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:25:01.351727 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:25:01.351828 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:25:01.353875 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 16:25:01.355428 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:01.357136 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 16:25:01.359925 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:25:01.360187 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:25:01.363287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:25:01.363548 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:25:01.365946 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:25:01.366287 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:25:01.376175 systemd-udevd[1354]: Using default interface naming scheme 'v255'.
Jan 29 16:25:01.381861 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 16:25:01.382869 augenrules[1379]: No rules
Jan 29 16:25:01.384781 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:25:01.385133 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:25:01.394665 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:01.404644 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:25:01.405989 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:25:01.407933 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:25:01.410930 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:25:01.414623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:25:01.422684 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:25:01.423969 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:25:01.424085 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:25:01.427699 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 16:25:01.429009 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:01.430924 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:25:01.432821 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 16:25:01.434841 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 16:25:01.436942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:25:01.437151 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:25:01.438882 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:25:01.439099 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:25:01.439557 augenrules[1386]: /sbin/augenrules: No change
Jan 29 16:25:01.440648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:25:01.440855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:25:01.443262 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:25:01.443541 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:25:01.445376 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 16:25:01.449649 augenrules[1429]: No rules
Jan 29 16:25:01.452902 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:25:01.453171 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:25:01.460619 systemd[1]: Finished ensure-sysext.service.
Jan 29 16:25:01.479577 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:25:01.480894 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:25:01.480965 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:25:01.489531 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 16:25:01.490930 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:25:01.491138 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 16:25:01.504438 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1411)
Jan 29 16:25:01.525888 systemd-resolved[1350]: Positive Trust Anchors:
Jan 29 16:25:01.525901 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:25:01.525933 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:25:01.526670 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:25:01.530722 systemd-resolved[1350]: Defaulting to hostname 'linux'.
Jan 29 16:25:01.538574 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 16:25:01.541252 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:25:01.552879 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:25:01.559467 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 16:25:01.581867 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 29 16:25:01.583580 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 16:25:01.584521 systemd-networkd[1445]: lo: Link UP
Jan 29 16:25:01.584527 systemd-networkd[1445]: lo: Gained carrier
Jan 29 16:25:01.585817 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 16:25:01.586421 kernel: ACPI: button: Power Button [PWRF]
Jan 29 16:25:01.586644 systemd-networkd[1445]: Enumeration completed
Jan 29 16:25:01.586965 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:25:01.588181 systemd[1]: Reached target network.target - Network.
Jan 29 16:25:01.588628 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:25:01.588639 systemd-networkd[1445]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:25:01.589896 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:25:01.589928 systemd-networkd[1445]: eth0: Link UP
Jan 29 16:25:01.589932 systemd-networkd[1445]: eth0: Gained carrier
Jan 29 16:25:01.589943 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:25:01.597563 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 29 16:25:01.600651 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 16:25:01.602542 systemd-networkd[1445]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:25:01.604869 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection.
Jan 29 16:25:01.606266 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 16:25:01.606307 systemd-timesyncd[1446]: Initial clock synchronization to Wed 2025-01-29 16:25:01.317795 UTC.
Jan 29 16:25:01.612796 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 16:25:01.614008 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 16:25:01.614208 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 16:25:01.615663 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 29 16:25:01.616186 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 29 16:25:01.638562 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:25:01.646445 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 16:25:01.732493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:25:01.736616 kernel: kvm_amd: TSC scaling supported
Jan 29 16:25:01.736679 kernel: kvm_amd: Nested Virtualization enabled
Jan 29 16:25:01.736693 kernel: kvm_amd: Nested Paging enabled
Jan 29 16:25:01.737597 kernel: kvm_amd: LBR virtualization supported
Jan 29 16:25:01.737619 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 29 16:25:01.738599 kernel: kvm_amd: Virtual GIF supported
Jan 29 16:25:01.756420 kernel: EDAC MC: Ver: 3.0.0
Jan 29 16:25:01.787930 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 16:25:01.800697 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 16:25:01.808972 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:25:01.838831 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 16:25:01.840472 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:25:01.841645 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:25:01.842878 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 16:25:01.844181 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 16:25:01.845684 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 16:25:01.847143 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 16:25:01.848444 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 16:25:01.849728 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 16:25:01.849756 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:25:01.850727 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:25:01.852655 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 16:25:01.855538 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 16:25:01.859110 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 29 16:25:01.860712 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 29 16:25:01.862026 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 29 16:25:01.865968 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 16:25:01.867514 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 29 16:25:01.870105 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 16:25:01.871866 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 16:25:01.873082 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:25:01.874089 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:25:01.875157 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:25:01.875189 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:25:01.876430 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 16:25:01.878492 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:25:01.878930 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 16:25:01.882092 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 16:25:01.884723 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 16:25:01.885896 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 16:25:01.887200 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 16:25:01.894288 jq[1482]: false
Jan 29 16:25:01.897538 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 16:25:01.900073 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 16:25:01.902477 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 16:25:01.911568 dbus-daemon[1481]: [system] SELinux support is enabled
Jan 29 16:25:01.913568 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 16:25:01.915742 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 16:25:01.916719 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 16:25:01.919640 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 16:25:01.922613 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 16:25:01.923159 extend-filesystems[1483]: Found loop3
Jan 29 16:25:01.923159 extend-filesystems[1483]: Found loop4
Jan 29 16:25:01.925595 extend-filesystems[1483]: Found loop5
Jan 29 16:25:01.925595 extend-filesystems[1483]: Found sr0
Jan 29 16:25:01.925595 extend-filesystems[1483]: Found vda
Jan 29 16:25:01.925595 extend-filesystems[1483]: Found vda1
Jan 29 16:25:01.925595 extend-filesystems[1483]: Found vda2
Jan 29 16:25:01.925595 extend-filesystems[1483]: Found vda3
Jan 29 16:25:01.925595 extend-filesystems[1483]: Found usr
Jan 29 16:25:01.925595 extend-filesystems[1483]: Found vda4
Jan 29 16:25:01.925595 extend-filesystems[1483]: Found vda6
Jan 29 16:25:01.925595 extend-filesystems[1483]: Found vda7
Jan 29 16:25:01.925595 extend-filesystems[1483]: Found vda9
Jan 29 16:25:01.925595 extend-filesystems[1483]: Checking size of /dev/vda9
Jan 29 16:25:01.925539 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 16:25:01.931889 jq[1498]: true
Jan 29 16:25:01.937267 update_engine[1495]: I20250129 16:25:01.936760  1495 main.cc:92] Flatcar Update Engine starting
Jan 29 16:25:01.938689 update_engine[1495]: I20250129 16:25:01.938659  1495 update_check_scheduler.cc:74] Next update check in 11m1s
Jan 29 16:25:01.939150 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 16:25:01.939528 extend-filesystems[1483]: Resized partition /dev/vda9
Jan 29 16:25:01.945968 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 16:25:01.946255 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 16:25:01.946626 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 16:25:01.946870 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 16:25:01.951536 extend-filesystems[1504]: resize2fs 1.47.1 (20-May-2024)
Jan 29 16:25:01.957985 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 16:25:01.958260 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 16:25:01.959418 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1395)
Jan 29 16:25:01.963535 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 29 16:25:01.981770 jq[1507]: true
Jan 29 16:25:01.990774 (ntainerd)[1509]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 16:25:01.997288 tar[1506]: linux-amd64/helm
Jan 29 16:25:01.997860 systemd-logind[1494]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 29 16:25:01.998094 systemd-logind[1494]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 16:25:02.002409 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 29 16:25:02.004943 systemd-logind[1494]: New seat seat0.
Jan 29 16:25:02.008993 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 16:25:02.013033 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 16:25:02.020791 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 16:25:02.020952 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 16:25:02.022648 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 16:25:02.022755 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 16:25:02.028432 sshd_keygen[1499]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 16:25:02.031752 extend-filesystems[1504]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 16:25:02.031752 extend-filesystems[1504]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 16:25:02.031752 extend-filesystems[1504]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 29 16:25:02.037301 extend-filesystems[1483]: Resized filesystem in /dev/vda9
Jan 29 16:25:02.031752 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 16:25:02.038454 bash[1534]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 16:25:02.041076 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 16:25:02.041314 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 16:25:02.043078 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 16:25:02.051504 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 16:25:02.061636 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 16:25:02.071916 locksmithd[1535]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 16:25:02.072595 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 16:25:02.079292 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 16:25:02.079591 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 16:25:02.087666 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 16:25:02.096178 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 16:25:02.100676 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 16:25:02.105593 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 29 16:25:02.107031 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 16:25:02.214389 containerd[1509]: time="2025-01-29T16:25:02.214303316Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 16:25:02.242704 containerd[1509]: time="2025-01-29T16:25:02.242627011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:02.244458 containerd[1509]: time="2025-01-29T16:25:02.244420381Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:02.244503 containerd[1509]: time="2025-01-29T16:25:02.244457263Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 16:25:02.244503 containerd[1509]: time="2025-01-29T16:25:02.244478500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 16:25:02.244847 containerd[1509]: time="2025-01-29T16:25:02.244768180Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 16:25:02.244847 containerd[1509]: time="2025-01-29T16:25:02.244809862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:02.244895 containerd[1509]: time="2025-01-29T16:25:02.244878267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:02.244895 containerd[1509]: time="2025-01-29T16:25:02.244890079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:02.245139 containerd[1509]: time="2025-01-29T16:25:02.245119668Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:02.245139 containerd[1509]: time="2025-01-29T16:25:02.245138501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:02.245194 containerd[1509]: time="2025-01-29T16:25:02.245151095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:02.245194 containerd[1509]: time="2025-01-29T16:25:02.245160511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:02.245328 containerd[1509]: time="2025-01-29T16:25:02.245246763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:02.245527 containerd[1509]: time="2025-01-29T16:25:02.245499647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:02.245674 containerd[1509]: time="2025-01-29T16:25:02.245656342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:02.245674 containerd[1509]: time="2025-01-29T16:25:02.245671466Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 16:25:02.245816 containerd[1509]: time="2025-01-29T16:25:02.245777169Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 16:25:02.245855 containerd[1509]: time="2025-01-29T16:25:02.245838388Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 16:25:02.251711 containerd[1509]: time="2025-01-29T16:25:02.251680593Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 16:25:02.251746 containerd[1509]: time="2025-01-29T16:25:02.251720228Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 16:25:02.251746 containerd[1509]: time="2025-01-29T16:25:02.251735545Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 16:25:02.251782 containerd[1509]: time="2025-01-29T16:25:02.251758945Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 16:25:02.251782 containerd[1509]: time="2025-01-29T16:25:02.251775856Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 16:25:02.251923 containerd[1509]: time="2025-01-29T16:25:02.251897020Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 16:25:02.252987 containerd[1509]: time="2025-01-29T16:25:02.252548947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 16:25:02.252987 containerd[1509]: time="2025-01-29T16:25:02.252754036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 16:25:02.252987 containerd[1509]: time="2025-01-29T16:25:02.252778788Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 16:25:02.252987 containerd[1509]: time="2025-01-29T16:25:02.252801329Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 16:25:02.252987 containerd[1509]: time="2025-01-29T16:25:02.252821205Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 16:25:02.252987 containerd[1509]: time="2025-01-29T16:25:02.252840452Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 16:25:02.252987 containerd[1509]: time="2025-01-29T16:25:02.252858009Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 16:25:02.252987 containerd[1509]: time="2025-01-29T16:25:02.252880628Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 16:25:02.252987 containerd[1509]: time="2025-01-29T16:25:02.252901140Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 16:25:02.252987 containerd[1509]: time="2025-01-29T16:25:02.252915646Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 16:25:02.252987 containerd[1509]: time="2025-01-29T16:25:02.252933243Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 16:25:02.252987 containerd[1509]: time="2025-01-29T16:25:02.252953446Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 16:25:02.253346 containerd[1509]: time="2025-01-29T16:25:02.253318078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253379 containerd[1509]: time="2025-01-29T16:25:02.253369022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253426 containerd[1509]: time="2025-01-29T16:25:02.253389410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253426 containerd[1509]: time="2025-01-29T16:25:02.253416896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253469 containerd[1509]: time="2025-01-29T16:25:02.253433053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253469 containerd[1509]: time="2025-01-29T16:25:02.253452696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253469 containerd[1509]: time="2025-01-29T16:25:02.253465174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253521 containerd[1509]: time="2025-01-29T16:25:02.253486295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253521 containerd[1509]: time="2025-01-29T16:25:02.253503833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253561 containerd[1509]: time="2025-01-29T16:25:02.253522830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253561 containerd[1509]: time="2025-01-29T16:25:02.253539373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253561 containerd[1509]: time="2025-01-29T16:25:02.253554497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253617 containerd[1509]: time="2025-01-29T16:25:02.253569389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253617 containerd[1509]: time="2025-01-29T16:25:02.253586396Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 16:25:02.253617 containerd[1509]: time="2025-01-29T16:25:02.253613534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253667 containerd[1509]: time="2025-01-29T16:25:02.253631564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.253667 containerd[1509]: time="2025-01-29T16:25:02.253646080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 16:25:02.254543 containerd[1509]: time="2025-01-29T16:25:02.254497755Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 16:25:02.254587 containerd[1509]: time="2025-01-29T16:25:02.254569357Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 16:25:02.254611 containerd[1509]: time="2025-01-29T16:25:02.254591647Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 16:25:02.254630 containerd[1509]: time="2025-01-29T16:25:02.254612391Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 16:25:02.254651 containerd[1509]: time="2025-01-29T16:25:02.254628568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.254679 containerd[1509]: time="2025-01-29T16:25:02.254656594Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 16:25:02.254679 containerd[1509]: time="2025-01-29T16:25:02.254676267Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 16:25:02.254716 containerd[1509]: time="2025-01-29T16:25:02.254690560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 16:25:02.255142 containerd[1509]: time="2025-01-29T16:25:02.255068520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:25:02.255362 containerd[1509]: time="2025-01-29T16:25:02.255152242Z" level=info msg="Connect containerd service" Jan 29 16:25:02.255362 containerd[1509]: time="2025-01-29T16:25:02.255207735Z" level=info msg="using legacy CRI server" Jan 29 16:25:02.255362 containerd[1509]: time="2025-01-29T16:25:02.255221159Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:25:02.255451 containerd[1509]: time="2025-01-29T16:25:02.255377892Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:25:02.256332 containerd[1509]: time="2025-01-29T16:25:02.256280521Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:25:02.256454 containerd[1509]: time="2025-01-29T16:25:02.256417514Z" level=info msg="Start subscribing containerd event" Jan 29 16:25:02.256494 containerd[1509]: time="2025-01-29T16:25:02.256475498Z" level=info msg="Start recovering state" Jan 29 16:25:02.256578 containerd[1509]: time="2025-01-29T16:25:02.256555666Z" level=info msg="Start event monitor" Jan 29 16:25:02.256605 containerd[1509]: time="2025-01-29T16:25:02.256577763Z" level=info msg="Start 
snapshots syncer" Jan 29 16:25:02.256605 containerd[1509]: time="2025-01-29T16:25:02.256589101Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:25:02.256605 containerd[1509]: time="2025-01-29T16:25:02.256602882Z" level=info msg="Start streaming server" Jan 29 16:25:02.257088 containerd[1509]: time="2025-01-29T16:25:02.256975444Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:25:02.257180 containerd[1509]: time="2025-01-29T16:25:02.257162521Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:25:02.257433 containerd[1509]: time="2025-01-29T16:25:02.257227237Z" level=info msg="containerd successfully booted in 0.044764s" Jan 29 16:25:02.257318 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:25:02.309244 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:25:02.317640 systemd[1]: Started sshd@0-10.0.0.142:22-10.0.0.1:46112.service - OpenSSH per-connection server daemon (10.0.0.1:46112). Jan 29 16:25:02.366149 tar[1506]: linux-amd64/LICENSE Jan 29 16:25:02.366149 tar[1506]: linux-amd64/README.md Jan 29 16:25:02.370559 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 46112 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:02.372371 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:02.378514 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:25:02.386115 systemd-logind[1494]: New session 1 of user core. Jan 29 16:25:02.387309 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:25:02.401634 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:25:02.413050 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:25:02.431634 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 29 16:25:02.435224 (systemd)[1577]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:25:02.437315 systemd-logind[1494]: New session c1 of user core. Jan 29 16:25:02.578029 systemd[1577]: Queued start job for default target default.target. Jan 29 16:25:02.589583 systemd[1577]: Created slice app.slice - User Application Slice. Jan 29 16:25:02.589606 systemd[1577]: Reached target paths.target - Paths. Jan 29 16:25:02.589643 systemd[1577]: Reached target timers.target - Timers. Jan 29 16:25:02.591044 systemd[1577]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:25:02.602021 systemd[1577]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:25:02.602161 systemd[1577]: Reached target sockets.target - Sockets. Jan 29 16:25:02.602211 systemd[1577]: Reached target basic.target - Basic System. Jan 29 16:25:02.602265 systemd[1577]: Reached target default.target - Main User Target. Jan 29 16:25:02.602301 systemd[1577]: Startup finished in 158ms. Jan 29 16:25:02.602608 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:25:02.605069 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:25:02.670936 systemd[1]: Started sshd@1-10.0.0.142:22-10.0.0.1:46126.service - OpenSSH per-connection server daemon (10.0.0.1:46126). Jan 29 16:25:02.712365 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 46126 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:02.713806 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:02.717450 systemd-logind[1494]: New session 2 of user core. Jan 29 16:25:02.733528 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 29 16:25:02.785939 sshd[1590]: Connection closed by 10.0.0.1 port 46126 Jan 29 16:25:02.786296 sshd-session[1588]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:02.799000 systemd[1]: sshd@1-10.0.0.142:22-10.0.0.1:46126.service: Deactivated successfully. Jan 29 16:25:02.800764 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:25:02.802040 systemd-logind[1494]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:25:02.803252 systemd[1]: Started sshd@2-10.0.0.142:22-10.0.0.1:46130.service - OpenSSH per-connection server daemon (10.0.0.1:46130). Jan 29 16:25:02.805271 systemd-logind[1494]: Removed session 2. Jan 29 16:25:02.842156 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 46130 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:02.843541 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:02.847473 systemd-logind[1494]: New session 3 of user core. Jan 29 16:25:02.857515 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:25:02.908724 sshd[1598]: Connection closed by 10.0.0.1 port 46130 Jan 29 16:25:02.909056 sshd-session[1595]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:02.912630 systemd[1]: sshd@2-10.0.0.142:22-10.0.0.1:46130.service: Deactivated successfully. Jan 29 16:25:02.914325 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:25:02.914910 systemd-logind[1494]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:25:02.915694 systemd-logind[1494]: Removed session 3. Jan 29 16:25:03.201567 systemd-networkd[1445]: eth0: Gained IPv6LL Jan 29 16:25:03.204685 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:25:03.206574 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:25:03.218611 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Jan 29 16:25:03.221039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:03.223199 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:25:03.240553 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 16:25:03.240862 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 16:25:03.242454 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:25:03.242967 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:25:03.804151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:03.805824 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:25:03.807089 systemd[1]: Startup finished in 730ms (kernel) + 6.787s (initrd) + 4.348s (userspace) = 11.866s. Jan 29 16:25:03.836717 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:04.223161 kubelet[1625]: E0129 16:25:04.223029 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:04.226919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:04.227135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:04.227538 systemd[1]: kubelet.service: Consumed 911ms CPU time, 238.2M memory peak. Jan 29 16:25:12.730183 systemd[1]: Started sshd@3-10.0.0.142:22-10.0.0.1:46622.service - OpenSSH per-connection server daemon (10.0.0.1:46622). 
Jan 29 16:25:12.769282 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 46622 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:12.770681 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:12.774874 systemd-logind[1494]: New session 4 of user core. Jan 29 16:25:12.795596 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:25:12.848188 sshd[1641]: Connection closed by 10.0.0.1 port 46622 Jan 29 16:25:12.848591 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:12.860075 systemd[1]: sshd@3-10.0.0.142:22-10.0.0.1:46622.service: Deactivated successfully. Jan 29 16:25:12.861829 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:25:12.863072 systemd-logind[1494]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:25:12.869640 systemd[1]: Started sshd@4-10.0.0.142:22-10.0.0.1:46636.service - OpenSSH per-connection server daemon (10.0.0.1:46636). Jan 29 16:25:12.870729 systemd-logind[1494]: Removed session 4. Jan 29 16:25:12.905977 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 46636 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:12.907246 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:12.911419 systemd-logind[1494]: New session 5 of user core. Jan 29 16:25:12.922535 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:25:12.969757 sshd[1649]: Connection closed by 10.0.0.1 port 46636 Jan 29 16:25:12.970152 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:12.978143 systemd[1]: sshd@4-10.0.0.142:22-10.0.0.1:46636.service: Deactivated successfully. Jan 29 16:25:12.980041 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:25:12.981325 systemd-logind[1494]: Session 5 logged out. Waiting for processes to exit. 
Jan 29 16:25:12.997731 systemd[1]: Started sshd@5-10.0.0.142:22-10.0.0.1:46652.service - OpenSSH per-connection server daemon (10.0.0.1:46652). Jan 29 16:25:12.998769 systemd-logind[1494]: Removed session 5. Jan 29 16:25:13.032757 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 46652 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:13.034174 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:13.038237 systemd-logind[1494]: New session 6 of user core. Jan 29 16:25:13.049520 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:25:13.102973 sshd[1657]: Connection closed by 10.0.0.1 port 46652 Jan 29 16:25:13.103349 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:13.118113 systemd[1]: sshd@5-10.0.0.142:22-10.0.0.1:46652.service: Deactivated successfully. Jan 29 16:25:13.119935 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:25:13.121602 systemd-logind[1494]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:25:13.134657 systemd[1]: Started sshd@6-10.0.0.142:22-10.0.0.1:46654.service - OpenSSH per-connection server daemon (10.0.0.1:46654). Jan 29 16:25:13.135576 systemd-logind[1494]: Removed session 6. Jan 29 16:25:13.170913 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 46654 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:13.172425 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:13.176720 systemd-logind[1494]: New session 7 of user core. Jan 29 16:25:13.186630 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 29 16:25:13.471051 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:25:13.471521 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:13.487611 sudo[1666]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:13.489529 sshd[1665]: Connection closed by 10.0.0.1 port 46654 Jan 29 16:25:13.489941 sshd-session[1662]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:13.508031 systemd[1]: sshd@6-10.0.0.142:22-10.0.0.1:46654.service: Deactivated successfully. Jan 29 16:25:13.509951 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:25:13.512136 systemd-logind[1494]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:25:13.522648 systemd[1]: Started sshd@7-10.0.0.142:22-10.0.0.1:46660.service - OpenSSH per-connection server daemon (10.0.0.1:46660). Jan 29 16:25:13.523665 systemd-logind[1494]: Removed session 7. Jan 29 16:25:13.561212 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 46660 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:13.562812 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:13.567333 systemd-logind[1494]: New session 8 of user core. Jan 29 16:25:13.586547 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 29 16:25:13.641204 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:25:13.641647 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:13.645501 sudo[1676]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:13.651817 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:25:13.652150 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:13.676738 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:25:13.707823 augenrules[1698]: No rules Jan 29 16:25:13.709313 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:25:13.709616 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:25:13.710890 sudo[1675]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:13.712376 sshd[1674]: Connection closed by 10.0.0.1 port 46660 Jan 29 16:25:13.712815 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:13.724088 systemd[1]: sshd@7-10.0.0.142:22-10.0.0.1:46660.service: Deactivated successfully. Jan 29 16:25:13.726115 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:25:13.727692 systemd-logind[1494]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:25:13.738635 systemd[1]: Started sshd@8-10.0.0.142:22-10.0.0.1:46662.service - OpenSSH per-connection server daemon (10.0.0.1:46662). Jan 29 16:25:13.739599 systemd-logind[1494]: Removed session 8. Jan 29 16:25:13.775698 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 46662 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:13.777015 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:13.781227 systemd-logind[1494]: New session 9 of user core. 
Jan 29 16:25:13.794517 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:25:13.846283 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:25:13.846614 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:14.135663 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:25:14.135808 (dockerd)[1729]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:25:14.469125 dockerd[1729]: time="2025-01-29T16:25:14.468950980Z" level=info msg="Starting up" Jan 29 16:25:14.474171 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:25:14.480577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:14.746696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:14.750508 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:14.876186 kubelet[1761]: E0129 16:25:14.876144 1761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:14.882284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:14.882547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:14.882907 systemd[1]: kubelet.service: Consumed 187ms CPU time, 98.8M memory peak. Jan 29 16:25:14.924685 dockerd[1729]: time="2025-01-29T16:25:14.924642983Z" level=info msg="Loading containers: start." 
Jan 29 16:25:15.085415 kernel: Initializing XFRM netlink socket Jan 29 16:25:15.161136 systemd-networkd[1445]: docker0: Link UP Jan 29 16:25:15.193787 dockerd[1729]: time="2025-01-29T16:25:15.193749291Z" level=info msg="Loading containers: done." Jan 29 16:25:15.206768 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck692175766-merged.mount: Deactivated successfully. Jan 29 16:25:15.208439 dockerd[1729]: time="2025-01-29T16:25:15.208369878Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:25:15.208525 dockerd[1729]: time="2025-01-29T16:25:15.208485517Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:25:15.208609 dockerd[1729]: time="2025-01-29T16:25:15.208590384Z" level=info msg="Daemon has completed initialization" Jan 29 16:25:15.240776 dockerd[1729]: time="2025-01-29T16:25:15.240705965Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:25:15.240876 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:25:16.085588 containerd[1509]: time="2025-01-29T16:25:16.085550936Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 16:25:17.420538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2539959846.mount: Deactivated successfully. 
Jan 29 16:25:18.851809 containerd[1509]: time="2025-01-29T16:25:18.851729045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:18.852754 containerd[1509]: time="2025-01-29T16:25:18.852706987Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 29 16:25:18.854709 containerd[1509]: time="2025-01-29T16:25:18.854658780Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:18.857773 containerd[1509]: time="2025-01-29T16:25:18.857722234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:18.858755 containerd[1509]: time="2025-01-29T16:25:18.858720667Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.77312896s" Jan 29 16:25:18.858755 containerd[1509]: time="2025-01-29T16:25:18.858755982Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 16:25:18.860361 containerd[1509]: time="2025-01-29T16:25:18.860292478Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 16:25:20.267119 containerd[1509]: time="2025-01-29T16:25:20.267065074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:20.267874 containerd[1509]: time="2025-01-29T16:25:20.267819166Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 29 16:25:20.269083 containerd[1509]: time="2025-01-29T16:25:20.269050717Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:20.274269 containerd[1509]: time="2025-01-29T16:25:20.274244579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:20.275376 containerd[1509]: time="2025-01-29T16:25:20.275334775Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.415008787s" Jan 29 16:25:20.275376 containerd[1509]: time="2025-01-29T16:25:20.275366731Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 16:25:20.276047 containerd[1509]: time="2025-01-29T16:25:20.276011116Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 16:25:21.748637 containerd[1509]: time="2025-01-29T16:25:21.748569097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:21.749390 containerd[1509]: time="2025-01-29T16:25:21.749324647Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 29 16:25:21.750531 containerd[1509]: time="2025-01-29T16:25:21.750488548Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:21.753267 containerd[1509]: time="2025-01-29T16:25:21.753224610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:21.754452 containerd[1509]: time="2025-01-29T16:25:21.754413146Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.478355724s" Jan 29 16:25:21.754515 containerd[1509]: time="2025-01-29T16:25:21.754449930Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 16:25:21.754988 containerd[1509]: time="2025-01-29T16:25:21.754954137Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 16:25:22.958866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1263920544.mount: Deactivated successfully. 
Jan 29 16:25:23.632961 containerd[1509]: time="2025-01-29T16:25:23.632893978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:23.634125 containerd[1509]: time="2025-01-29T16:25:23.634080082Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 16:25:23.635185 containerd[1509]: time="2025-01-29T16:25:23.635146544Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:23.637733 containerd[1509]: time="2025-01-29T16:25:23.637704514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:23.638219 containerd[1509]: time="2025-01-29T16:25:23.638171001Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.883186888s" Jan 29 16:25:23.638219 containerd[1509]: time="2025-01-29T16:25:23.638214137Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 16:25:23.638803 containerd[1509]: time="2025-01-29T16:25:23.638725300Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 16:25:24.141914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2945139457.mount: Deactivated successfully. 
Jan 29 16:25:24.944467 containerd[1509]: time="2025-01-29T16:25:24.944410527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:24.945124 containerd[1509]: time="2025-01-29T16:25:24.945079943Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 16:25:24.946280 containerd[1509]: time="2025-01-29T16:25:24.946242583Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:24.949054 containerd[1509]: time="2025-01-29T16:25:24.949026385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:24.950242 containerd[1509]: time="2025-01-29T16:25:24.950197554Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.311435342s" Jan 29 16:25:24.950282 containerd[1509]: time="2025-01-29T16:25:24.950246712Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 16:25:24.950766 containerd[1509]: time="2025-01-29T16:25:24.950724266Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 16:25:25.019117 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 16:25:25.028569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 16:25:25.176621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:25:25.180991 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:25:25.330425 kubelet[2065]: E0129 16:25:25.330284 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:25:25.334669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:25:25.334884 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:25:25.335255 systemd[1]: kubelet.service: Consumed 256ms CPU time, 98.1M memory peak.
Jan 29 16:25:25.682040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1844530105.mount: Deactivated successfully.
Jan 29 16:25:25.687451 containerd[1509]: time="2025-01-29T16:25:25.687417113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:25:25.688132 containerd[1509]: time="2025-01-29T16:25:25.688089420Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jan 29 16:25:25.689323 containerd[1509]: time="2025-01-29T16:25:25.689292476Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:25:25.691496 containerd[1509]: time="2025-01-29T16:25:25.691466211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:25:25.692189 containerd[1509]: time="2025-01-29T16:25:25.692151421Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 741.396803ms"
Jan 29 16:25:25.692231 containerd[1509]: time="2025-01-29T16:25:25.692187318Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 29 16:25:25.692653 containerd[1509]: time="2025-01-29T16:25:25.692634949Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 29 16:25:26.199363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount891344732.mount: Deactivated successfully.
Jan 29 16:25:28.121834 containerd[1509]: time="2025-01-29T16:25:28.121772158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:25:28.122755 containerd[1509]: time="2025-01-29T16:25:28.122722387Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973"
Jan 29 16:25:28.124384 containerd[1509]: time="2025-01-29T16:25:28.124357014Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:25:28.127321 containerd[1509]: time="2025-01-29T16:25:28.127264264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:25:28.128612 containerd[1509]: time="2025-01-29T16:25:28.128578408Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.435916056s"
Jan 29 16:25:28.128672 containerd[1509]: time="2025-01-29T16:25:28.128612143Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jan 29 16:25:30.679021 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:25:30.679190 systemd[1]: kubelet.service: Consumed 256ms CPU time, 98.1M memory peak.
Jan 29 16:25:30.692597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:25:30.720634 systemd[1]: Reload requested from client PID 2162 ('systemctl') (unit session-9.scope)...
Jan 29 16:25:30.720655 systemd[1]: Reloading...
Jan 29 16:25:30.823429 zram_generator::config[2209]: No configuration found.
Jan 29 16:25:31.103591 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:25:31.205045 systemd[1]: Reloading finished in 483 ms.
Jan 29 16:25:31.253035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:25:31.256524 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:25:31.258483 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 16:25:31.258745 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:25:31.258780 systemd[1]: kubelet.service: Consumed 137ms CPU time, 83.5M memory peak.
Jan 29 16:25:31.260266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:25:31.418305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:25:31.422896 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:25:31.458598 kubelet[2256]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:25:31.458598 kubelet[2256]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:25:31.458598 kubelet[2256]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:25:31.458944 kubelet[2256]: I0129 16:25:31.458644 2256 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:25:31.705109 kubelet[2256]: I0129 16:25:31.704988 2256 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 29 16:25:31.705109 kubelet[2256]: I0129 16:25:31.705028 2256 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:25:31.705331 kubelet[2256]: I0129 16:25:31.705304 2256 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 29 16:25:31.724529 kubelet[2256]: I0129 16:25:31.724468 2256 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:25:31.725056 kubelet[2256]: E0129 16:25:31.724969 2256 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:25:31.729891 kubelet[2256]: E0129 16:25:31.729849 2256 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 16:25:31.729891 kubelet[2256]: I0129 16:25:31.729879 2256 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 16:25:31.735859 kubelet[2256]: I0129 16:25:31.735827 2256 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 16:25:31.736736 kubelet[2256]: I0129 16:25:31.736705 2256 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 16:25:31.736896 kubelet[2256]: I0129 16:25:31.736851 2256 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:25:31.737048 kubelet[2256]: I0129 16:25:31.736884 2256 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 16:25:31.737146 kubelet[2256]: I0129 16:25:31.737052 2256 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:25:31.737146 kubelet[2256]: I0129 16:25:31.737061 2256 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 16:25:31.737195 kubelet[2256]: I0129 16:25:31.737163 2256 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:25:31.738478 kubelet[2256]: I0129 16:25:31.738449 2256 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 16:25:31.738478 kubelet[2256]: I0129 16:25:31.738470 2256 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:25:31.738557 kubelet[2256]: I0129 16:25:31.738502 2256 kubelet.go:314] "Adding apiserver pod source"
Jan 29 16:25:31.738557 kubelet[2256]: I0129 16:25:31.738526 2256 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:25:31.742649 kubelet[2256]: W0129 16:25:31.742583 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Jan 29 16:25:31.742698 kubelet[2256]: E0129 16:25:31.742664 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:25:31.743823 kubelet[2256]: W0129 16:25:31.743106 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Jan 29 16:25:31.743823 kubelet[2256]: E0129 16:25:31.743144 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:25:31.745098 kubelet[2256]: I0129 16:25:31.744959 2256 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:25:31.746331 kubelet[2256]: I0129 16:25:31.746316 2256 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:25:31.747077 kubelet[2256]: W0129 16:25:31.747049 2256 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 16:25:31.748007 kubelet[2256]: I0129 16:25:31.747666 2256 server.go:1269] "Started kubelet"
Jan 29 16:25:31.748822 kubelet[2256]: I0129 16:25:31.748632 2256 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:25:31.749269 kubelet[2256]: I0129 16:25:31.749246 2256 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:25:31.749788 kubelet[2256]: I0129 16:25:31.749720 2256 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:25:31.750447 kubelet[2256]: I0129 16:25:31.750419 2256 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:25:31.751413 kubelet[2256]: I0129 16:25:31.750543 2256 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 16:25:31.751413 kubelet[2256]: E0129 16:25:31.751295 2256 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:25:31.751413 kubelet[2256]: I0129 16:25:31.751330 2256 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 16:25:31.751553 kubelet[2256]: I0129 16:25:31.751518 2256 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 16:25:31.751600 kubelet[2256]: I0129 16:25:31.751593 2256 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 16:25:31.751685 kubelet[2256]: I0129 16:25:31.751651 2256 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:25:31.752415 kubelet[2256]: W0129 16:25:31.751921 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Jan 29 16:25:31.752415 kubelet[2256]: E0129 16:25:31.751959 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:25:31.752415 kubelet[2256]: E0129 16:25:31.752016 2256 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 16:25:31.752415 kubelet[2256]: E0129 16:25:31.752179 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="200ms"
Jan 29 16:25:31.752841 kubelet[2256]: I0129 16:25:31.752824 2256 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:25:31.752906 kubelet[2256]: I0129 16:25:31.752885 2256 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:25:31.753908 kubelet[2256]: I0129 16:25:31.753886 2256 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:25:31.755236 kubelet[2256]: E0129 16:25:31.753097 2256 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.142:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.142:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f368653cb30ae default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:25:31.747643566 +0000 UTC m=+0.320552222,LastTimestamp:2025-01-29 16:25:31.747643566 +0000 UTC m=+0.320552222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 16:25:31.767563 kubelet[2256]: I0129 16:25:31.767474 2256 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 16:25:31.767563 kubelet[2256]: I0129 16:25:31.767554 2256 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 16:25:31.767563 kubelet[2256]: I0129 16:25:31.767570 2256 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:25:31.769390 kubelet[2256]: I0129 16:25:31.769359 2256 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 16:25:31.772177 kubelet[2256]: I0129 16:25:31.772137 2256 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 16:25:31.772227 kubelet[2256]: I0129 16:25:31.772190 2256 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 16:25:31.772227 kubelet[2256]: I0129 16:25:31.772215 2256 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 29 16:25:31.772298 kubelet[2256]: E0129 16:25:31.772277 2256 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 16:25:31.852339 kubelet[2256]: E0129 16:25:31.852283 2256 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:25:31.872539 kubelet[2256]: E0129 16:25:31.872495 2256 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 29 16:25:31.952556 kubelet[2256]: E0129 16:25:31.952499 2256 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:25:31.952722 kubelet[2256]: E0129 16:25:31.952637 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="400ms"
Jan 29 16:25:32.052921 kubelet[2256]: E0129 16:25:32.052866 2256 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:25:32.053877 kubelet[2256]: W0129 16:25:32.053810 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Jan 29 16:25:32.053942 kubelet[2256]: E0129 16:25:32.053873 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:25:32.054418 kubelet[2256]: I0129 16:25:32.054356 2256 policy_none.go:49] "None policy: Start"
Jan 29 16:25:32.055002 kubelet[2256]: I0129 16:25:32.054988 2256 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 16:25:32.055109 kubelet[2256]: I0129 16:25:32.055009 2256 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:25:32.063600 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 16:25:32.073203 kubelet[2256]: E0129 16:25:32.073175 2256 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 29 16:25:32.081728 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 16:25:32.095603 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 16:25:32.096720 kubelet[2256]: I0129 16:25:32.096651 2256 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:25:32.096905 kubelet[2256]: I0129 16:25:32.096887 2256 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 16:25:32.096970 kubelet[2256]: I0129 16:25:32.096904 2256 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:25:32.097199 kubelet[2256]: I0129 16:25:32.097163 2256 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:25:32.098115 kubelet[2256]: E0129 16:25:32.098085 2256 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 29 16:25:32.198870 kubelet[2256]: I0129 16:25:32.198834 2256 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 29 16:25:32.199170 kubelet[2256]: E0129 16:25:32.199132 2256 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Jan 29 16:25:32.353808 kubelet[2256]: E0129 16:25:32.353678 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="800ms"
Jan 29 16:25:32.400866 kubelet[2256]: I0129 16:25:32.400827 2256 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 29 16:25:32.401107 kubelet[2256]: E0129 16:25:32.401084 2256 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Jan 29 16:25:32.481178 systemd[1]: Created slice kubepods-burstable-pod74f364bf30077267b95c5fd908aee5ba.slice - libcontainer container kubepods-burstable-pod74f364bf30077267b95c5fd908aee5ba.slice.
Jan 29 16:25:32.492032 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice.
Jan 29 16:25:32.513270 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice.
Jan 29 16:25:32.556113 kubelet[2256]: I0129 16:25:32.556070 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74f364bf30077267b95c5fd908aee5ba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"74f364bf30077267b95c5fd908aee5ba\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 16:25:32.556113 kubelet[2256]: I0129 16:25:32.556099 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74f364bf30077267b95c5fd908aee5ba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"74f364bf30077267b95c5fd908aee5ba\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 16:25:32.556460 kubelet[2256]: I0129 16:25:32.556117 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost"
Jan 29 16:25:32.556460 kubelet[2256]: I0129 16:25:32.556131 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74f364bf30077267b95c5fd908aee5ba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"74f364bf30077267b95c5fd908aee5ba\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 16:25:32.556460 kubelet[2256]: I0129 16:25:32.556153 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:25:32.556460 kubelet[2256]: I0129 16:25:32.556175 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:25:32.556460 kubelet[2256]: I0129 16:25:32.556191 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:25:32.556582 kubelet[2256]: I0129 16:25:32.556207 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:25:32.556582 kubelet[2256]: I0129 16:25:32.556221 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:25:32.579591 kubelet[2256]: W0129 16:25:32.579539 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Jan 29 16:25:32.579647 kubelet[2256]: E0129 16:25:32.579592 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:25:32.587104 kubelet[2256]: W0129 16:25:32.587053 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Jan 29 16:25:32.587153 kubelet[2256]: E0129 16:25:32.587103 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:25:32.791183 kubelet[2256]: E0129 16:25:32.791154 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:25:32.791658 containerd[1509]: time="2025-01-29T16:25:32.791614107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:74f364bf30077267b95c5fd908aee5ba,Namespace:kube-system,Attempt:0,}"
Jan 29 16:25:32.802656 kubelet[2256]: I0129 16:25:32.802626 2256 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 29 16:25:32.802986 kubelet[2256]: E0129 16:25:32.802955 2256 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Jan 29 16:25:32.811140 kubelet[2256]: E0129 16:25:32.811099 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:25:32.811515 containerd[1509]: time="2025-01-29T16:25:32.811479198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}"
Jan 29 16:25:32.815687 kubelet[2256]: E0129 16:25:32.815648 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:25:32.815921 containerd[1509]: time="2025-01-29T16:25:32.815897667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}"
Jan 29 16:25:33.154337 kubelet[2256]: E0129 16:25:33.154165 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="1.6s"
Jan 29 16:25:33.158513 kubelet[2256]: W0129 16:25:33.158451 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Jan 29 16:25:33.158513 kubelet[2256]: E0129 16:25:33.158508 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:25:33.536089 kubelet[2256]: W0129 16:25:33.536022 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Jan 29 16:25:33.536089 kubelet[2256]: E0129 16:25:33.536089 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:25:33.605179 kubelet[2256]: I0129 16:25:33.605133 2256 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 29 16:25:33.605635 kubelet[2256]: E0129 16:25:33.605467 2256 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Jan 29 16:25:33.624471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950986731.mount: Deactivated successfully.
Jan 29 16:25:33.630335 containerd[1509]: time="2025-01-29T16:25:33.630301532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:25:33.633707 containerd[1509]: time="2025-01-29T16:25:33.633626130Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 16:25:33.634767 containerd[1509]: time="2025-01-29T16:25:33.634729096Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:25:33.637385 containerd[1509]: time="2025-01-29T16:25:33.637345201Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:25:33.638236 containerd[1509]: time="2025-01-29T16:25:33.638160373Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:25:33.639408 containerd[1509]: time="2025-01-29T16:25:33.639368736Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:25:33.640238 containerd[1509]: time="2025-01-29T16:25:33.640189424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:25:33.643200 containerd[1509]: time="2025-01-29T16:25:33.643168041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:25:33.644864 
containerd[1509]: time="2025-01-29T16:25:33.644828513Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 828.875637ms" Jan 29 16:25:33.645579 containerd[1509]: time="2025-01-29T16:25:33.645539458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 833.950354ms" Jan 29 16:25:33.646490 containerd[1509]: time="2025-01-29T16:25:33.646455681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 854.752465ms" Jan 29 16:25:33.774542 kubelet[2256]: E0129 16:25:33.774499 2256 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:25:33.803555 containerd[1509]: time="2025-01-29T16:25:33.802835297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:33.803555 containerd[1509]: time="2025-01-29T16:25:33.802947445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:33.803555 containerd[1509]: time="2025-01-29T16:25:33.802975722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:33.803555 containerd[1509]: time="2025-01-29T16:25:33.803065752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:33.804021 containerd[1509]: time="2025-01-29T16:25:33.803632772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:33.804021 containerd[1509]: time="2025-01-29T16:25:33.803914996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:33.804021 containerd[1509]: time="2025-01-29T16:25:33.803947499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:33.804160 containerd[1509]: time="2025-01-29T16:25:33.804125114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:33.804201 containerd[1509]: time="2025-01-29T16:25:33.802276708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:33.804355 containerd[1509]: time="2025-01-29T16:25:33.804292425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:33.804355 containerd[1509]: time="2025-01-29T16:25:33.804312552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:33.804902 containerd[1509]: time="2025-01-29T16:25:33.804574378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:33.829538 systemd[1]: Started cri-containerd-4ea6ef068b1da07c7903709290b1e9e13c50a278be2f12c130ec1bb1ef5eb3df.scope - libcontainer container 4ea6ef068b1da07c7903709290b1e9e13c50a278be2f12c130ec1bb1ef5eb3df. Jan 29 16:25:33.834351 systemd[1]: Started cri-containerd-76fdf47280e50d8330e5ba5955b651c6a1922d8c5c1b2e971301f658356b6c3a.scope - libcontainer container 76fdf47280e50d8330e5ba5955b651c6a1922d8c5c1b2e971301f658356b6c3a. Jan 29 16:25:33.836317 systemd[1]: Started cri-containerd-abdc180aa966dc318f83c44811a948654dbbe7b0260d5b5fbca571aa896a5cab.scope - libcontainer container abdc180aa966dc318f83c44811a948654dbbe7b0260d5b5fbca571aa896a5cab. 
Jan 29 16:25:33.873870 containerd[1509]: time="2025-01-29T16:25:33.873821455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:74f364bf30077267b95c5fd908aee5ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ea6ef068b1da07c7903709290b1e9e13c50a278be2f12c130ec1bb1ef5eb3df\"" Jan 29 16:25:33.874760 kubelet[2256]: E0129 16:25:33.874722 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:33.878350 containerd[1509]: time="2025-01-29T16:25:33.878307801Z" level=info msg="CreateContainer within sandbox \"4ea6ef068b1da07c7903709290b1e9e13c50a278be2f12c130ec1bb1ef5eb3df\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:25:33.880339 containerd[1509]: time="2025-01-29T16:25:33.880306035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"76fdf47280e50d8330e5ba5955b651c6a1922d8c5c1b2e971301f658356b6c3a\"" Jan 29 16:25:33.881232 kubelet[2256]: E0129 16:25:33.881183 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:33.883002 containerd[1509]: time="2025-01-29T16:25:33.882965885Z" level=info msg="CreateContainer within sandbox \"76fdf47280e50d8330e5ba5955b651c6a1922d8c5c1b2e971301f658356b6c3a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:25:33.886685 containerd[1509]: time="2025-01-29T16:25:33.886639355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"abdc180aa966dc318f83c44811a948654dbbe7b0260d5b5fbca571aa896a5cab\"" Jan 29 
16:25:33.887259 kubelet[2256]: E0129 16:25:33.887216 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:33.889258 containerd[1509]: time="2025-01-29T16:25:33.889214843Z" level=info msg="CreateContainer within sandbox \"abdc180aa966dc318f83c44811a948654dbbe7b0260d5b5fbca571aa896a5cab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:25:33.912202 containerd[1509]: time="2025-01-29T16:25:33.912063356Z" level=info msg="CreateContainer within sandbox \"4ea6ef068b1da07c7903709290b1e9e13c50a278be2f12c130ec1bb1ef5eb3df\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"233c3d831f9206b9930592a33cf1c585c5202d653befe9c5ad08ab8167035742\"" Jan 29 16:25:33.912951 containerd[1509]: time="2025-01-29T16:25:33.912697465Z" level=info msg="StartContainer for \"233c3d831f9206b9930592a33cf1c585c5202d653befe9c5ad08ab8167035742\"" Jan 29 16:25:33.919732 containerd[1509]: time="2025-01-29T16:25:33.919686633Z" level=info msg="CreateContainer within sandbox \"76fdf47280e50d8330e5ba5955b651c6a1922d8c5c1b2e971301f658356b6c3a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3053fb2b8e4a97d1f2d9eab0d0c9a425c05cf02d6f19c7dcd89b3728e288d96f\"" Jan 29 16:25:33.920340 containerd[1509]: time="2025-01-29T16:25:33.920319660Z" level=info msg="StartContainer for \"3053fb2b8e4a97d1f2d9eab0d0c9a425c05cf02d6f19c7dcd89b3728e288d96f\"" Jan 29 16:25:33.925132 containerd[1509]: time="2025-01-29T16:25:33.925078698Z" level=info msg="CreateContainer within sandbox \"abdc180aa966dc318f83c44811a948654dbbe7b0260d5b5fbca571aa896a5cab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9b1e2391a9516d9fbf647cc989f3b69f158f7d48de97a543819fe93059cb8cb3\"" Jan 29 16:25:33.926172 containerd[1509]: time="2025-01-29T16:25:33.925614125Z" level=info 
msg="StartContainer for \"9b1e2391a9516d9fbf647cc989f3b69f158f7d48de97a543819fe93059cb8cb3\"" Jan 29 16:25:33.942693 systemd[1]: Started cri-containerd-233c3d831f9206b9930592a33cf1c585c5202d653befe9c5ad08ab8167035742.scope - libcontainer container 233c3d831f9206b9930592a33cf1c585c5202d653befe9c5ad08ab8167035742. Jan 29 16:25:33.955556 systemd[1]: Started cri-containerd-3053fb2b8e4a97d1f2d9eab0d0c9a425c05cf02d6f19c7dcd89b3728e288d96f.scope - libcontainer container 3053fb2b8e4a97d1f2d9eab0d0c9a425c05cf02d6f19c7dcd89b3728e288d96f. Jan 29 16:25:33.958627 systemd[1]: Started cri-containerd-9b1e2391a9516d9fbf647cc989f3b69f158f7d48de97a543819fe93059cb8cb3.scope - libcontainer container 9b1e2391a9516d9fbf647cc989f3b69f158f7d48de97a543819fe93059cb8cb3. Jan 29 16:25:34.002206 containerd[1509]: time="2025-01-29T16:25:34.002097839Z" level=info msg="StartContainer for \"9b1e2391a9516d9fbf647cc989f3b69f158f7d48de97a543819fe93059cb8cb3\" returns successfully" Jan 29 16:25:34.002742 containerd[1509]: time="2025-01-29T16:25:34.002658328Z" level=info msg="StartContainer for \"233c3d831f9206b9930592a33cf1c585c5202d653befe9c5ad08ab8167035742\" returns successfully" Jan 29 16:25:34.008892 containerd[1509]: time="2025-01-29T16:25:34.008823917Z" level=info msg="StartContainer for \"3053fb2b8e4a97d1f2d9eab0d0c9a425c05cf02d6f19c7dcd89b3728e288d96f\" returns successfully" Jan 29 16:25:34.794012 kubelet[2256]: E0129 16:25:34.793965 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:34.796372 kubelet[2256]: E0129 16:25:34.796332 2256 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 16:25:34.798516 kubelet[2256]: E0129 16:25:34.798473 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:34.800701 kubelet[2256]: E0129 16:25:34.800655 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:35.126890 kubelet[2256]: E0129 16:25:35.126766 2256 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 29 16:25:35.207631 kubelet[2256]: I0129 16:25:35.207577 2256 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 16:25:35.213643 kubelet[2256]: I0129 16:25:35.213607 2256 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 16:25:35.744108 kubelet[2256]: I0129 16:25:35.744059 2256 apiserver.go:52] "Watching apiserver" Jan 29 16:25:35.752331 kubelet[2256]: I0129 16:25:35.752311 2256 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:25:35.807731 kubelet[2256]: E0129 16:25:35.807624 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:36.801937 kubelet[2256]: E0129 16:25:36.801893 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:36.862287 systemd[1]: Reload requested from client PID 2534 ('systemctl') (unit session-9.scope)... Jan 29 16:25:36.862303 systemd[1]: Reloading... Jan 29 16:25:36.939425 zram_generator::config[2578]: No configuration found. 
Jan 29 16:25:37.047159 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:25:37.162444 systemd[1]: Reloading finished in 299 ms. Jan 29 16:25:37.188850 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:37.202900 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:25:37.203231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:37.203286 systemd[1]: kubelet.service: Consumed 786ms CPU time, 118.5M memory peak. Jan 29 16:25:37.210894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:37.380906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:37.386134 (kubelet)[2623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:25:37.424484 kubelet[2623]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:25:37.424484 kubelet[2623]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:25:37.424484 kubelet[2623]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 16:25:37.424484 kubelet[2623]: I0129 16:25:37.424113 2623 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:25:37.431508 kubelet[2623]: I0129 16:25:37.431475 2623 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 16:25:37.431508 kubelet[2623]: I0129 16:25:37.431499 2623 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:25:37.431738 kubelet[2623]: I0129 16:25:37.431716 2623 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 16:25:37.432988 kubelet[2623]: I0129 16:25:37.432965 2623 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 16:25:37.434709 kubelet[2623]: I0129 16:25:37.434663 2623 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:25:37.437636 kubelet[2623]: E0129 16:25:37.437607 2623 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:25:37.437636 kubelet[2623]: I0129 16:25:37.437637 2623 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:25:37.442259 kubelet[2623]: I0129 16:25:37.442209 2623 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:25:37.442347 kubelet[2623]: I0129 16:25:37.442318 2623 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:25:37.442495 kubelet[2623]: I0129 16:25:37.442454 2623 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:25:37.442653 kubelet[2623]: I0129 16:25:37.442483 2623 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jan 29 16:25:37.442653 kubelet[2623]: I0129 16:25:37.442648 2623 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:25:37.442653 kubelet[2623]: I0129 16:25:37.442664 2623 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 16:25:37.442823 kubelet[2623]: I0129 16:25:37.442696 2623 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:25:37.442823 kubelet[2623]: I0129 16:25:37.442800 2623 kubelet.go:408] "Attempting to sync node with API server" Jan 29 16:25:37.442823 kubelet[2623]: I0129 16:25:37.442810 2623 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:25:37.442915 kubelet[2623]: I0129 16:25:37.442840 2623 kubelet.go:314] "Adding apiserver pod source" Jan 29 16:25:37.442915 kubelet[2623]: I0129 16:25:37.442851 2623 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:25:37.444710 kubelet[2623]: I0129 16:25:37.443896 2623 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:25:37.444710 kubelet[2623]: I0129 16:25:37.444275 2623 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:25:37.446457 kubelet[2623]: I0129 16:25:37.445685 2623 server.go:1269] "Started kubelet" Jan 29 16:25:37.448656 kubelet[2623]: I0129 16:25:37.448634 2623 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:25:37.451595 kubelet[2623]: I0129 16:25:37.451559 2623 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:25:37.454228 kubelet[2623]: I0129 16:25:37.453747 2623 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:25:37.454826 kubelet[2623]: I0129 16:25:37.454797 2623 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:25:37.455077 kubelet[2623]: I0129 16:25:37.455046 2623 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:25:37.457060 kubelet[2623]: I0129 16:25:37.455902 2623 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:25:37.457060 kubelet[2623]: I0129 16:25:37.456815 2623 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:25:37.457060 kubelet[2623]: I0129 16:25:37.456898 2623 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:25:37.457761 kubelet[2623]: I0129 16:25:37.457742 2623 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:25:37.458136 kubelet[2623]: I0129 16:25:37.458115 2623 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:25:37.459175 kubelet[2623]: I0129 16:25:37.459134 2623 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:25:37.459317 kubelet[2623]: I0129 16:25:37.459298 2623 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:25:37.459904 kubelet[2623]: E0129 16:25:37.459874 2623 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:25:37.469316 kubelet[2623]: I0129 16:25:37.469280 2623 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:25:37.470519 kubelet[2623]: I0129 16:25:37.470497 2623 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:25:37.470577 kubelet[2623]: I0129 16:25:37.470527 2623 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:25:37.470577 kubelet[2623]: I0129 16:25:37.470543 2623 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:25:37.470633 kubelet[2623]: E0129 16:25:37.470579 2623 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:25:37.493076 kubelet[2623]: I0129 16:25:37.493031 2623 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:25:37.493076 kubelet[2623]: I0129 16:25:37.493072 2623 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:25:37.493240 kubelet[2623]: I0129 16:25:37.493094 2623 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:25:37.493388 kubelet[2623]: I0129 16:25:37.493355 2623 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 16:25:37.493440 kubelet[2623]: I0129 16:25:37.493378 2623 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 16:25:37.493440 kubelet[2623]: I0129 16:25:37.493424 2623 policy_none.go:49] "None policy: Start" Jan 29 16:25:37.493970 kubelet[2623]: I0129 16:25:37.493948 2623 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:25:37.494016 kubelet[2623]: I0129 16:25:37.493975 2623 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:25:37.494177 kubelet[2623]: I0129 16:25:37.494158 2623 state_mem.go:75] "Updated machine memory state" Jan 29 16:25:37.498067 kubelet[2623]: I0129 16:25:37.498037 2623 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:25:37.498275 kubelet[2623]: I0129 16:25:37.498247 2623 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:25:37.498328 kubelet[2623]: I0129 16:25:37.498269 2623 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:25:37.498566 kubelet[2623]: I0129 16:25:37.498511 2623 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:25:37.579587 kubelet[2623]: E0129 16:25:37.579546 2623 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 16:25:37.605752 kubelet[2623]: I0129 16:25:37.605724 2623 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 16:25:37.612705 kubelet[2623]: I0129 16:25:37.612656 2623 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 29 16:25:37.612825 kubelet[2623]: I0129 16:25:37.612764 2623 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 16:25:37.661138 kubelet[2623]: I0129 16:25:37.661093 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74f364bf30077267b95c5fd908aee5ba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"74f364bf30077267b95c5fd908aee5ba\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:25:37.661138 kubelet[2623]: I0129 16:25:37.661128 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74f364bf30077267b95c5fd908aee5ba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"74f364bf30077267b95c5fd908aee5ba\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:25:37.661327 kubelet[2623]: I0129 16:25:37.661152 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 16:25:37.661327 
kubelet[2623]: I0129 16:25:37.661173 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74f364bf30077267b95c5fd908aee5ba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"74f364bf30077267b95c5fd908aee5ba\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:25:37.661327 kubelet[2623]: I0129 16:25:37.661196 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:37.661327 kubelet[2623]: I0129 16:25:37.661214 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:37.661327 kubelet[2623]: I0129 16:25:37.661235 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:37.661476 kubelet[2623]: I0129 16:25:37.661282 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 
29 16:25:37.661476 kubelet[2623]: I0129 16:25:37.661303 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:37.878305 kubelet[2623]: E0129 16:25:37.878273 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:37.880542 kubelet[2623]: E0129 16:25:37.880506 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:37.880631 kubelet[2623]: E0129 16:25:37.880566 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:38.443538 kubelet[2623]: I0129 16:25:38.443492 2623 apiserver.go:52] "Watching apiserver" Jan 29 16:25:38.459752 kubelet[2623]: I0129 16:25:38.459699 2623 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:25:38.481419 kubelet[2623]: E0129 16:25:38.481055 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:38.481419 kubelet[2623]: E0129 16:25:38.481113 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:38.523317 kubelet[2623]: E0129 16:25:38.523282 2623 kubelet.go:1915] "Failed creating a mirror pod for" 
err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 16:25:38.523509 kubelet[2623]: E0129 16:25:38.523498 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:38.541020 kubelet[2623]: I0129 16:25:38.540858 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.540842561 podStartE2EDuration="1.540842561s" podCreationTimestamp="2025-01-29 16:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:25:38.539160007 +0000 UTC m=+1.148656717" watchObservedRunningTime="2025-01-29 16:25:38.540842561 +0000 UTC m=+1.150339271" Jan 29 16:25:38.554892 kubelet[2623]: I0129 16:25:38.554680 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.554662619 podStartE2EDuration="3.554662619s" podCreationTimestamp="2025-01-29 16:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:25:38.545537526 +0000 UTC m=+1.155034226" watchObservedRunningTime="2025-01-29 16:25:38.554662619 +0000 UTC m=+1.164159330" Jan 29 16:25:38.554892 kubelet[2623]: I0129 16:25:38.554795 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.554790439 podStartE2EDuration="1.554790439s" podCreationTimestamp="2025-01-29 16:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:25:38.554631697 +0000 UTC m=+1.164128407" watchObservedRunningTime="2025-01-29 16:25:38.554790439 
+0000 UTC m=+1.164287149" Jan 29 16:25:39.482166 kubelet[2623]: E0129 16:25:39.482118 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:39.482613 kubelet[2623]: E0129 16:25:39.482234 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:41.772832 kubelet[2623]: E0129 16:25:41.772793 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:41.823883 kubelet[2623]: I0129 16:25:41.823835 2623 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:25:41.824215 containerd[1509]: time="2025-01-29T16:25:41.824156384Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:25:41.824859 kubelet[2623]: I0129 16:25:41.824380 2623 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:25:41.895238 sudo[1710]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:41.896583 sshd[1709]: Connection closed by 10.0.0.1 port 46662 Jan 29 16:25:41.897056 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:41.900672 systemd[1]: sshd@8-10.0.0.142:22-10.0.0.1:46662.service: Deactivated successfully. Jan 29 16:25:41.902951 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:25:41.903172 systemd[1]: session-9.scope: Consumed 4.639s CPU time, 213.9M memory peak. Jan 29 16:25:41.904422 systemd-logind[1494]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:25:41.905292 systemd-logind[1494]: Removed session 9. 
Jan 29 16:25:42.609590 systemd[1]: Created slice kubepods-besteffort-pod0aa83639_fb41_4908_a90a_f06bcd306146.slice - libcontainer container kubepods-besteffort-pod0aa83639_fb41_4908_a90a_f06bcd306146.slice. Jan 29 16:25:42.691197 kubelet[2623]: I0129 16:25:42.691150 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aa83639-fb41-4908-a90a-f06bcd306146-xtables-lock\") pod \"kube-proxy-st79r\" (UID: \"0aa83639-fb41-4908-a90a-f06bcd306146\") " pod="kube-system/kube-proxy-st79r" Jan 29 16:25:42.691197 kubelet[2623]: I0129 16:25:42.691184 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aa83639-fb41-4908-a90a-f06bcd306146-lib-modules\") pod \"kube-proxy-st79r\" (UID: \"0aa83639-fb41-4908-a90a-f06bcd306146\") " pod="kube-system/kube-proxy-st79r" Jan 29 16:25:42.691197 kubelet[2623]: I0129 16:25:42.691203 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qssdj\" (UniqueName: \"kubernetes.io/projected/0aa83639-fb41-4908-a90a-f06bcd306146-kube-api-access-qssdj\") pod \"kube-proxy-st79r\" (UID: \"0aa83639-fb41-4908-a90a-f06bcd306146\") " pod="kube-system/kube-proxy-st79r" Jan 29 16:25:42.691453 kubelet[2623]: I0129 16:25:42.691220 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0aa83639-fb41-4908-a90a-f06bcd306146-kube-proxy\") pod \"kube-proxy-st79r\" (UID: \"0aa83639-fb41-4908-a90a-f06bcd306146\") " pod="kube-system/kube-proxy-st79r" Jan 29 16:25:42.822053 systemd[1]: Created slice kubepods-besteffort-podd309120b_059f_42b7_9e1b_36b25e9f5b69.slice - libcontainer container kubepods-besteffort-podd309120b_059f_42b7_9e1b_36b25e9f5b69.slice. 
Jan 29 16:25:42.894386 kubelet[2623]: I0129 16:25:42.894274 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d309120b-059f-42b7-9e1b-36b25e9f5b69-var-lib-calico\") pod \"tigera-operator-76c4976dd7-xfzzc\" (UID: \"d309120b-059f-42b7-9e1b-36b25e9f5b69\") " pod="tigera-operator/tigera-operator-76c4976dd7-xfzzc" Jan 29 16:25:42.894386 kubelet[2623]: I0129 16:25:42.894313 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnzcr\" (UniqueName: \"kubernetes.io/projected/d309120b-059f-42b7-9e1b-36b25e9f5b69-kube-api-access-vnzcr\") pod \"tigera-operator-76c4976dd7-xfzzc\" (UID: \"d309120b-059f-42b7-9e1b-36b25e9f5b69\") " pod="tigera-operator/tigera-operator-76c4976dd7-xfzzc" Jan 29 16:25:42.923526 kubelet[2623]: E0129 16:25:42.923504 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:42.924043 containerd[1509]: time="2025-01-29T16:25:42.923999298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-st79r,Uid:0aa83639-fb41-4908-a90a-f06bcd306146,Namespace:kube-system,Attempt:0,}" Jan 29 16:25:42.953918 containerd[1509]: time="2025-01-29T16:25:42.953824282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:42.953918 containerd[1509]: time="2025-01-29T16:25:42.953881476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:42.953918 containerd[1509]: time="2025-01-29T16:25:42.953894543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:42.954215 containerd[1509]: time="2025-01-29T16:25:42.954115224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:42.978526 systemd[1]: Started cri-containerd-a32fe5e1d7f03236cb441d90b74c1e515f12656db3b2788d33a7c8b286ec9046.scope - libcontainer container a32fe5e1d7f03236cb441d90b74c1e515f12656db3b2788d33a7c8b286ec9046. Jan 29 16:25:43.000362 containerd[1509]: time="2025-01-29T16:25:43.000322142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-st79r,Uid:0aa83639-fb41-4908-a90a-f06bcd306146,Namespace:kube-system,Attempt:0,} returns sandbox id \"a32fe5e1d7f03236cb441d90b74c1e515f12656db3b2788d33a7c8b286ec9046\"" Jan 29 16:25:43.001028 kubelet[2623]: E0129 16:25:43.001007 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:43.004329 containerd[1509]: time="2025-01-29T16:25:43.004273782Z" level=info msg="CreateContainer within sandbox \"a32fe5e1d7f03236cb441d90b74c1e515f12656db3b2788d33a7c8b286ec9046\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:25:43.124781 containerd[1509]: time="2025-01-29T16:25:43.124740853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-xfzzc,Uid:d309120b-059f-42b7-9e1b-36b25e9f5b69,Namespace:tigera-operator,Attempt:0,}" Jan 29 16:25:43.346154 containerd[1509]: time="2025-01-29T16:25:43.346097358Z" level=info msg="CreateContainer within sandbox \"a32fe5e1d7f03236cb441d90b74c1e515f12656db3b2788d33a7c8b286ec9046\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"41132292dfdbe05827d6a74ea17f2930c512e42b580172c2aeb05dde3aada210\"" Jan 29 16:25:43.347898 containerd[1509]: time="2025-01-29T16:25:43.347586575Z" level=info msg="StartContainer for 
\"41132292dfdbe05827d6a74ea17f2930c512e42b580172c2aeb05dde3aada210\"" Jan 29 16:25:43.371931 containerd[1509]: time="2025-01-29T16:25:43.371277127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:43.371931 containerd[1509]: time="2025-01-29T16:25:43.371368037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:43.371931 containerd[1509]: time="2025-01-29T16:25:43.371378890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:43.372426 containerd[1509]: time="2025-01-29T16:25:43.372019736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:43.379650 systemd[1]: Started cri-containerd-41132292dfdbe05827d6a74ea17f2930c512e42b580172c2aeb05dde3aada210.scope - libcontainer container 41132292dfdbe05827d6a74ea17f2930c512e42b580172c2aeb05dde3aada210. Jan 29 16:25:43.394558 systemd[1]: Started cri-containerd-c46eab154ac3aa64e52908198a6f439c04be308cbcaf7d740a919670ceeb5757.scope - libcontainer container c46eab154ac3aa64e52908198a6f439c04be308cbcaf7d740a919670ceeb5757. 
Jan 29 16:25:43.418981 containerd[1509]: time="2025-01-29T16:25:43.418919981Z" level=info msg="StartContainer for \"41132292dfdbe05827d6a74ea17f2930c512e42b580172c2aeb05dde3aada210\" returns successfully" Jan 29 16:25:43.437142 containerd[1509]: time="2025-01-29T16:25:43.437105324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-xfzzc,Uid:d309120b-059f-42b7-9e1b-36b25e9f5b69,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c46eab154ac3aa64e52908198a6f439c04be308cbcaf7d740a919670ceeb5757\"" Jan 29 16:25:43.439174 containerd[1509]: time="2025-01-29T16:25:43.438908266Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 16:25:43.489768 kubelet[2623]: E0129 16:25:43.489739 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:43.499505 kubelet[2623]: I0129 16:25:43.499436 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-st79r" podStartSLOduration=1.499418467 podStartE2EDuration="1.499418467s" podCreationTimestamp="2025-01-29 16:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:25:43.498929023 +0000 UTC m=+6.108425733" watchObservedRunningTime="2025-01-29 16:25:43.499418467 +0000 UTC m=+6.108915177" Jan 29 16:25:45.251945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471943489.mount: Deactivated successfully. 
Jan 29 16:25:45.810389 containerd[1509]: time="2025-01-29T16:25:45.810335785Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:45.840601 containerd[1509]: time="2025-01-29T16:25:45.840543371Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 16:25:45.859789 containerd[1509]: time="2025-01-29T16:25:45.859756469Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:45.901955 containerd[1509]: time="2025-01-29T16:25:45.901904207Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:45.902637 containerd[1509]: time="2025-01-29T16:25:45.902592460Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.463654215s" Jan 29 16:25:45.902637 containerd[1509]: time="2025-01-29T16:25:45.902629994Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 16:25:45.904718 containerd[1509]: time="2025-01-29T16:25:45.904685326Z" level=info msg="CreateContainer within sandbox \"c46eab154ac3aa64e52908198a6f439c04be308cbcaf7d740a919670ceeb5757\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 16:25:45.929530 containerd[1509]: time="2025-01-29T16:25:45.929480431Z" level=info msg="CreateContainer within sandbox 
\"c46eab154ac3aa64e52908198a6f439c04be308cbcaf7d740a919670ceeb5757\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6b852deb15318b11818a62ce65ae81e2c8b8dbdd5cc55e169d762a7a1cd872e2\"" Jan 29 16:25:45.930022 containerd[1509]: time="2025-01-29T16:25:45.929987405Z" level=info msg="StartContainer for \"6b852deb15318b11818a62ce65ae81e2c8b8dbdd5cc55e169d762a7a1cd872e2\"" Jan 29 16:25:45.958609 systemd[1]: Started cri-containerd-6b852deb15318b11818a62ce65ae81e2c8b8dbdd5cc55e169d762a7a1cd872e2.scope - libcontainer container 6b852deb15318b11818a62ce65ae81e2c8b8dbdd5cc55e169d762a7a1cd872e2. Jan 29 16:25:46.095463 containerd[1509]: time="2025-01-29T16:25:46.095039148Z" level=info msg="StartContainer for \"6b852deb15318b11818a62ce65ae81e2c8b8dbdd5cc55e169d762a7a1cd872e2\" returns successfully" Jan 29 16:25:46.437121 kubelet[2623]: E0129 16:25:46.437011 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:46.496051 kubelet[2623]: E0129 16:25:46.496027 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:46.510575 kubelet[2623]: I0129 16:25:46.510453 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-xfzzc" podStartSLOduration=2.045087367 podStartE2EDuration="4.510431567s" podCreationTimestamp="2025-01-29 16:25:42 +0000 UTC" firstStartedPulling="2025-01-29 16:25:43.438244113 +0000 UTC m=+6.047740823" lastFinishedPulling="2025-01-29 16:25:45.903588313 +0000 UTC m=+8.513085023" observedRunningTime="2025-01-29 16:25:46.509652888 +0000 UTC m=+9.119149598" watchObservedRunningTime="2025-01-29 16:25:46.510431567 +0000 UTC m=+9.119928277" Jan 29 16:25:46.942003 update_engine[1495]: I20250129 16:25:46.941928 1495 
update_attempter.cc:509] Updating boot flags... Jan 29 16:25:47.013480 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3014) Jan 29 16:25:47.064476 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3015) Jan 29 16:25:47.103472 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3015) Jan 29 16:25:48.991496 kubelet[2623]: E0129 16:25:48.991462 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:49.164068 systemd[1]: Created slice kubepods-besteffort-pod3d094cb3_2234_4213_9dc0_86fa1b50c762.slice - libcontainer container kubepods-besteffort-pod3d094cb3_2234_4213_9dc0_86fa1b50c762.slice. Jan 29 16:25:49.231565 kubelet[2623]: I0129 16:25:49.231505 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3d094cb3-2234-4213-9dc0-86fa1b50c762-typha-certs\") pod \"calico-typha-bb985f7fc-m6jt8\" (UID: \"3d094cb3-2234-4213-9dc0-86fa1b50c762\") " pod="calico-system/calico-typha-bb985f7fc-m6jt8" Jan 29 16:25:49.231565 kubelet[2623]: I0129 16:25:49.231543 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d094cb3-2234-4213-9dc0-86fa1b50c762-tigera-ca-bundle\") pod \"calico-typha-bb985f7fc-m6jt8\" (UID: \"3d094cb3-2234-4213-9dc0-86fa1b50c762\") " pod="calico-system/calico-typha-bb985f7fc-m6jt8" Jan 29 16:25:49.231565 kubelet[2623]: I0129 16:25:49.231563 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lpvs\" (UniqueName: \"kubernetes.io/projected/3d094cb3-2234-4213-9dc0-86fa1b50c762-kube-api-access-8lpvs\") pod 
\"calico-typha-bb985f7fc-m6jt8\" (UID: \"3d094cb3-2234-4213-9dc0-86fa1b50c762\") " pod="calico-system/calico-typha-bb985f7fc-m6jt8" Jan 29 16:25:49.246948 systemd[1]: Created slice kubepods-besteffort-podc409e3a3_2603_4c85_9275_7e75fe2ea937.slice - libcontainer container kubepods-besteffort-podc409e3a3_2603_4c85_9275_7e75fe2ea937.slice. Jan 29 16:25:49.331869 kubelet[2623]: I0129 16:25:49.331811 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z856k\" (UniqueName: \"kubernetes.io/projected/c409e3a3-2603-4c85-9275-7e75fe2ea937-kube-api-access-z856k\") pod \"calico-node-tg9rm\" (UID: \"c409e3a3-2603-4c85-9275-7e75fe2ea937\") " pod="calico-system/calico-node-tg9rm" Jan 29 16:25:49.332449 kubelet[2623]: I0129 16:25:49.332095 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c409e3a3-2603-4c85-9275-7e75fe2ea937-xtables-lock\") pod \"calico-node-tg9rm\" (UID: \"c409e3a3-2603-4c85-9275-7e75fe2ea937\") " pod="calico-system/calico-node-tg9rm" Jan 29 16:25:49.332449 kubelet[2623]: I0129 16:25:49.332144 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c409e3a3-2603-4c85-9275-7e75fe2ea937-tigera-ca-bundle\") pod \"calico-node-tg9rm\" (UID: \"c409e3a3-2603-4c85-9275-7e75fe2ea937\") " pod="calico-system/calico-node-tg9rm" Jan 29 16:25:49.332449 kubelet[2623]: I0129 16:25:49.332165 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c409e3a3-2603-4c85-9275-7e75fe2ea937-cni-log-dir\") pod \"calico-node-tg9rm\" (UID: \"c409e3a3-2603-4c85-9275-7e75fe2ea937\") " pod="calico-system/calico-node-tg9rm" Jan 29 16:25:49.332449 kubelet[2623]: I0129 16:25:49.332210 2623 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c409e3a3-2603-4c85-9275-7e75fe2ea937-flexvol-driver-host\") pod \"calico-node-tg9rm\" (UID: \"c409e3a3-2603-4c85-9275-7e75fe2ea937\") " pod="calico-system/calico-node-tg9rm" Jan 29 16:25:49.332449 kubelet[2623]: I0129 16:25:49.332231 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c409e3a3-2603-4c85-9275-7e75fe2ea937-cni-bin-dir\") pod \"calico-node-tg9rm\" (UID: \"c409e3a3-2603-4c85-9275-7e75fe2ea937\") " pod="calico-system/calico-node-tg9rm" Jan 29 16:25:49.332635 kubelet[2623]: I0129 16:25:49.332249 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c409e3a3-2603-4c85-9275-7e75fe2ea937-cni-net-dir\") pod \"calico-node-tg9rm\" (UID: \"c409e3a3-2603-4c85-9275-7e75fe2ea937\") " pod="calico-system/calico-node-tg9rm" Jan 29 16:25:49.332635 kubelet[2623]: I0129 16:25:49.332272 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c409e3a3-2603-4c85-9275-7e75fe2ea937-lib-modules\") pod \"calico-node-tg9rm\" (UID: \"c409e3a3-2603-4c85-9275-7e75fe2ea937\") " pod="calico-system/calico-node-tg9rm" Jan 29 16:25:49.332635 kubelet[2623]: I0129 16:25:49.332288 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c409e3a3-2603-4c85-9275-7e75fe2ea937-policysync\") pod \"calico-node-tg9rm\" (UID: \"c409e3a3-2603-4c85-9275-7e75fe2ea937\") " pod="calico-system/calico-node-tg9rm" Jan 29 16:25:49.332635 kubelet[2623]: I0129 16:25:49.332305 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c409e3a3-2603-4c85-9275-7e75fe2ea937-var-run-calico\") pod \"calico-node-tg9rm\" (UID: \"c409e3a3-2603-4c85-9275-7e75fe2ea937\") " pod="calico-system/calico-node-tg9rm" Jan 29 16:25:49.332635 kubelet[2623]: I0129 16:25:49.332323 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c409e3a3-2603-4c85-9275-7e75fe2ea937-var-lib-calico\") pod \"calico-node-tg9rm\" (UID: \"c409e3a3-2603-4c85-9275-7e75fe2ea937\") " pod="calico-system/calico-node-tg9rm" Jan 29 16:25:49.332892 kubelet[2623]: I0129 16:25:49.332356 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c409e3a3-2603-4c85-9275-7e75fe2ea937-node-certs\") pod \"calico-node-tg9rm\" (UID: \"c409e3a3-2603-4c85-9275-7e75fe2ea937\") " pod="calico-system/calico-node-tg9rm" Jan 29 16:25:49.353622 kubelet[2623]: E0129 16:25:49.353384 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mjx6x" podUID="9cc09215-26d9-4b38-816c-abf4c3c659ad" Jan 29 16:25:49.433453 kubelet[2623]: I0129 16:25:49.433371 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cc09215-26d9-4b38-816c-abf4c3c659ad-kubelet-dir\") pod \"csi-node-driver-mjx6x\" (UID: \"9cc09215-26d9-4b38-816c-abf4c3c659ad\") " pod="calico-system/csi-node-driver-mjx6x" Jan 29 16:25:49.433453 kubelet[2623]: I0129 16:25:49.433435 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg7wf\" (UniqueName: 
\"kubernetes.io/projected/9cc09215-26d9-4b38-816c-abf4c3c659ad-kube-api-access-vg7wf\") pod \"csi-node-driver-mjx6x\" (UID: \"9cc09215-26d9-4b38-816c-abf4c3c659ad\") " pod="calico-system/csi-node-driver-mjx6x" Jan 29 16:25:49.433933 kubelet[2623]: I0129 16:25:49.433507 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9cc09215-26d9-4b38-816c-abf4c3c659ad-varrun\") pod \"csi-node-driver-mjx6x\" (UID: \"9cc09215-26d9-4b38-816c-abf4c3c659ad\") " pod="calico-system/csi-node-driver-mjx6x" Jan 29 16:25:49.433933 kubelet[2623]: I0129 16:25:49.433530 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9cc09215-26d9-4b38-816c-abf4c3c659ad-socket-dir\") pod \"csi-node-driver-mjx6x\" (UID: \"9cc09215-26d9-4b38-816c-abf4c3c659ad\") " pod="calico-system/csi-node-driver-mjx6x" Jan 29 16:25:49.433933 kubelet[2623]: I0129 16:25:49.433574 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9cc09215-26d9-4b38-816c-abf4c3c659ad-registration-dir\") pod \"csi-node-driver-mjx6x\" (UID: \"9cc09215-26d9-4b38-816c-abf4c3c659ad\") " pod="calico-system/csi-node-driver-mjx6x" Jan 29 16:25:49.435349 kubelet[2623]: E0129 16:25:49.435309 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:49.435349 kubelet[2623]: W0129 16:25:49.435336 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:49.435518 kubelet[2623]: E0129 16:25:49.435389 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:49.435701 kubelet[2623]: E0129 16:25:49.435686 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:49.435701 kubelet[2623]: W0129 16:25:49.435698 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:49.435871 kubelet[2623]: E0129 16:25:49.435713 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:49.435952 kubelet[2623]: E0129 16:25:49.435905 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:49.435952 kubelet[2623]: W0129 16:25:49.435912 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:49.435952 kubelet[2623]: E0129 16:25:49.435921 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:49.436125 kubelet[2623]: E0129 16:25:49.436070 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:49.436125 kubelet[2623]: W0129 16:25:49.436077 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:49.436271 kubelet[2623]: E0129 16:25:49.436223 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:49.436382 kubelet[2623]: E0129 16:25:49.436368 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:49.436382 kubelet[2623]: W0129 16:25:49.436379 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:49.436538 kubelet[2623]: E0129 16:25:49.436508 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:49.436740 kubelet[2623]: E0129 16:25:49.436719 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:49.436740 kubelet[2623]: W0129 16:25:49.436731 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:49.436967 kubelet[2623]: E0129 16:25:49.436830 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:49.437193 kubelet[2623]: E0129 16:25:49.437179 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:49.437193 kubelet[2623]: W0129 16:25:49.437190 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:49.437308 kubelet[2623]: E0129 16:25:49.437278 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:49.437540 kubelet[2623]: E0129 16:25:49.437501 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:49.437540 kubelet[2623]: W0129 16:25:49.437513 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:49.437709 kubelet[2623]: E0129 16:25:49.437654 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:49.438284 kubelet[2623]: E0129 16:25:49.438097 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:49.438284 kubelet[2623]: W0129 16:25:49.438111 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:49.438284 kubelet[2623]: E0129 16:25:49.438121 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:49.440212 kubelet[2623]: E0129 16:25:49.440160 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:49.440212 kubelet[2623]: W0129 16:25:49.440207 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:49.440297 kubelet[2623]: E0129 16:25:49.440223 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:49.449233 kubelet[2623]: E0129 16:25:49.449128 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:49.449233 kubelet[2623]: W0129 16:25:49.449152 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:49.449233 kubelet[2623]: E0129 16:25:49.449171 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:49.470416 kubelet[2623]: E0129 16:25:49.470371 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:49.471183 containerd[1509]: time="2025-01-29T16:25:49.471117416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bb985f7fc-m6jt8,Uid:3d094cb3-2234-4213-9dc0-86fa1b50c762,Namespace:calico-system,Attempt:0,}" Jan 29 16:25:49.505884 containerd[1509]: time="2025-01-29T16:25:49.505456639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:49.505884 containerd[1509]: time="2025-01-29T16:25:49.505534602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:49.505884 containerd[1509]: time="2025-01-29T16:25:49.505548008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:49.505884 containerd[1509]: time="2025-01-29T16:25:49.505630841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:49.532652 systemd[1]: Started cri-containerd-7960185e160d3330d6ccf488e0242ecff6f261551b15501c58ab6668a2907380.scope - libcontainer container 7960185e160d3330d6ccf488e0242ecff6f261551b15501c58ab6668a2907380. 
Jan 29 16:25:49.534548 kubelet[2623]: E0129 16:25:49.534517 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.534548 kubelet[2623]: W0129 16:25:49.534536 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.534726 kubelet[2623]: E0129 16:25:49.534557 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.534836 kubelet[2623]: E0129 16:25:49.534800 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.534836 kubelet[2623]: W0129 16:25:49.534817 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.534836 kubelet[2623]: E0129 16:25:49.534829 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.535554 kubelet[2623]: E0129 16:25:49.535538 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.535554 kubelet[2623]: W0129 16:25:49.535551 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.535623 kubelet[2623]: E0129 16:25:49.535569 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.536133 kubelet[2623]: E0129 16:25:49.535982 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.536133 kubelet[2623]: W0129 16:25:49.536006 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.536133 kubelet[2623]: E0129 16:25:49.536031 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.536493 kubelet[2623]: E0129 16:25:49.536333 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.536493 kubelet[2623]: W0129 16:25:49.536344 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.536493 kubelet[2623]: E0129 16:25:49.536355 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.536639 kubelet[2623]: E0129 16:25:49.536628 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.536689 kubelet[2623]: W0129 16:25:49.536678 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.536765 kubelet[2623]: E0129 16:25:49.536734 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.537170 kubelet[2623]: E0129 16:25:49.537109 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.537170 kubelet[2623]: W0129 16:25:49.537142 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.537261 kubelet[2623]: E0129 16:25:49.537171 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.537585 kubelet[2623]: E0129 16:25:49.537549 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.537585 kubelet[2623]: W0129 16:25:49.537572 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.537758 kubelet[2623]: E0129 16:25:49.537620 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.538095 kubelet[2623]: E0129 16:25:49.538073 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.538869 kubelet[2623]: W0129 16:25:49.538723 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.538869 kubelet[2623]: E0129 16:25:49.538789 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.539113 kubelet[2623]: E0129 16:25:49.539094 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.539113 kubelet[2623]: W0129 16:25:49.539107 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.539241 kubelet[2623]: E0129 16:25:49.539221 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.539331 kubelet[2623]: E0129 16:25:49.539317 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.539331 kubelet[2623]: W0129 16:25:49.539328 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.539455 kubelet[2623]: E0129 16:25:49.539436 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.539688 kubelet[2623]: E0129 16:25:49.539673 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.539938 kubelet[2623]: W0129 16:25:49.539845 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.539999 kubelet[2623]: E0129 16:25:49.539986 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.540107 kubelet[2623]: E0129 16:25:49.540085 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.540107 kubelet[2623]: W0129 16:25:49.540095 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.540340 kubelet[2623]: E0129 16:25:49.540251 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.540491 kubelet[2623]: E0129 16:25:49.540480 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.540592 kubelet[2623]: W0129 16:25:49.540538 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.540657 kubelet[2623]: E0129 16:25:49.540623 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.540995 kubelet[2623]: E0129 16:25:49.540903 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.540995 kubelet[2623]: W0129 16:25:49.540914 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.541540 kubelet[2623]: E0129 16:25:49.541419 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.541540 kubelet[2623]: E0129 16:25:49.541517 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.541540 kubelet[2623]: W0129 16:25:49.541525 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.541922 kubelet[2623]: E0129 16:25:49.541803 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.542080 kubelet[2623]: E0129 16:25:49.542070 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.542172 kubelet[2623]: W0129 16:25:49.542118 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.542172 kubelet[2623]: E0129 16:25:49.542136 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.542476 kubelet[2623]: E0129 16:25:49.542376 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.542476 kubelet[2623]: W0129 16:25:49.542385 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.542597 kubelet[2623]: E0129 16:25:49.542551 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.542681 kubelet[2623]: E0129 16:25:49.542672 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.542770 kubelet[2623]: W0129 16:25:49.542721 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.542950 kubelet[2623]: E0129 16:25:49.542829 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.543172 kubelet[2623]: E0129 16:25:49.543152 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.543274 kubelet[2623]: W0129 16:25:49.543222 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.543505 kubelet[2623]: E0129 16:25:49.543352 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.543694 kubelet[2623]: E0129 16:25:49.543682 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.543870 kubelet[2623]: W0129 16:25:49.543821 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.544000 kubelet[2623]: E0129 16:25:49.543938 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.544649 kubelet[2623]: E0129 16:25:49.544527 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.544649 kubelet[2623]: W0129 16:25:49.544539 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.544866 kubelet[2623]: E0129 16:25:49.544831 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.545137 kubelet[2623]: E0129 16:25:49.545116 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.545190 kubelet[2623]: W0129 16:25:49.545179 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.545304 kubelet[2623]: E0129 16:25:49.545281 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.545719 kubelet[2623]: E0129 16:25:49.545629 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.545719 kubelet[2623]: W0129 16:25:49.545639 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.545800 kubelet[2623]: E0129 16:25:49.545754 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.546204 kubelet[2623]: E0129 16:25:49.546158 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.546204 kubelet[2623]: W0129 16:25:49.546170 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.546204 kubelet[2623]: E0129 16:25:49.546179 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.548289 kubelet[2623]: E0129 16:25:49.548262 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:49.548289 kubelet[2623]: W0129 16:25:49.548284 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:49.548428 kubelet[2623]: E0129 16:25:49.548303 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:49.551509 kubelet[2623]: E0129 16:25:49.551251 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:25:49.551892 containerd[1509]: time="2025-01-29T16:25:49.551857644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tg9rm,Uid:c409e3a3-2603-4c85-9275-7e75fe2ea937,Namespace:calico-system,Attempt:0,}"
Jan 29 16:25:49.574204 containerd[1509]: time="2025-01-29T16:25:49.574149384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bb985f7fc-m6jt8,Uid:3d094cb3-2234-4213-9dc0-86fa1b50c762,Namespace:calico-system,Attempt:0,} returns sandbox id \"7960185e160d3330d6ccf488e0242ecff6f261551b15501c58ab6668a2907380\""
Jan 29 16:25:49.575031 kubelet[2623]: E0129 16:25:49.574890 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:25:49.575839 containerd[1509]: time="2025-01-29T16:25:49.575811183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 29 16:25:49.708573 containerd[1509]: time="2025-01-29T16:25:49.708474208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:25:49.708573 containerd[1509]: time="2025-01-29T16:25:49.708523005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:25:49.708573 containerd[1509]: time="2025-01-29T16:25:49.708533085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:25:49.708815 containerd[1509]: time="2025-01-29T16:25:49.708604685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:25:49.726536 systemd[1]: Started cri-containerd-2d33faef49792b97eb88da08fe174a20ea3085ac438b330cf96a074dcfdfd579.scope - libcontainer container 2d33faef49792b97eb88da08fe174a20ea3085ac438b330cf96a074dcfdfd579.
Jan 29 16:25:49.748447 containerd[1509]: time="2025-01-29T16:25:49.748390643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tg9rm,Uid:c409e3a3-2603-4c85-9275-7e75fe2ea937,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d33faef49792b97eb88da08fe174a20ea3085ac438b330cf96a074dcfdfd579\""
Jan 29 16:25:49.748977 kubelet[2623]: E0129 16:25:49.748958 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:25:51.063642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323712902.mount: Deactivated successfully.
Jan 29 16:25:51.471107 kubelet[2623]: E0129 16:25:51.470944 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mjx6x" podUID="9cc09215-26d9-4b38-816c-abf4c3c659ad"
Jan 29 16:25:51.777586 kubelet[2623]: E0129 16:25:51.777553 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:25:51.843323 kubelet[2623]: E0129 16:25:51.843045 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.843323 kubelet[2623]: W0129 16:25:51.843069 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.843323 kubelet[2623]: E0129 16:25:51.843093 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:51.843323 kubelet[2623]: E0129 16:25:51.843302 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.843323 kubelet[2623]: W0129 16:25:51.843311 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.843323 kubelet[2623]: E0129 16:25:51.843323 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:51.843678 kubelet[2623]: E0129 16:25:51.843533 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.843678 kubelet[2623]: W0129 16:25:51.843542 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.843678 kubelet[2623]: E0129 16:25:51.843552 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:51.843762 kubelet[2623]: E0129 16:25:51.843743 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.843762 kubelet[2623]: W0129 16:25:51.843758 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.843819 kubelet[2623]: E0129 16:25:51.843768 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:51.844013 kubelet[2623]: E0129 16:25:51.843983 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.844013 kubelet[2623]: W0129 16:25:51.844000 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.844013 kubelet[2623]: E0129 16:25:51.844009 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:51.844205 kubelet[2623]: E0129 16:25:51.844180 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.844205 kubelet[2623]: W0129 16:25:51.844197 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.844290 kubelet[2623]: E0129 16:25:51.844207 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:51.844416 kubelet[2623]: E0129 16:25:51.844381 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.844416 kubelet[2623]: W0129 16:25:51.844412 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.844480 kubelet[2623]: E0129 16:25:51.844422 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:51.846518 kubelet[2623]: E0129 16:25:51.846482 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.846518 kubelet[2623]: W0129 16:25:51.846503 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.846518 kubelet[2623]: E0129 16:25:51.846517 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:51.847803 kubelet[2623]: E0129 16:25:51.847711 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.847803 kubelet[2623]: W0129 16:25:51.847728 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.847803 kubelet[2623]: E0129 16:25:51.847739 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:51.848750 kubelet[2623]: E0129 16:25:51.848600 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.848750 kubelet[2623]: W0129 16:25:51.848617 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.848750 kubelet[2623]: E0129 16:25:51.848628 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:51.848857 kubelet[2623]: E0129 16:25:51.848835 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.848857 kubelet[2623]: W0129 16:25:51.848845 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.848857 kubelet[2623]: E0129 16:25:51.848855 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:51.851658 kubelet[2623]: E0129 16:25:51.851598 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.851658 kubelet[2623]: W0129 16:25:51.851621 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.851658 kubelet[2623]: E0129 16:25:51.851639 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:51.852022 kubelet[2623]: E0129 16:25:51.851877 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.852022 kubelet[2623]: W0129 16:25:51.851892 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.852022 kubelet[2623]: E0129 16:25:51.851915 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:51.852255 kubelet[2623]: E0129 16:25:51.852119 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.852255 kubelet[2623]: W0129 16:25:51.852129 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.852255 kubelet[2623]: E0129 16:25:51.852139 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:51.852732 kubelet[2623]: E0129 16:25:51.852336 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:25:51.852732 kubelet[2623]: W0129 16:25:51.852345 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:25:51.852732 kubelet[2623]: E0129 16:25:51.852355 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:25:53.277950 containerd[1509]: time="2025-01-29T16:25:53.277903768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:25:53.371299 containerd[1509]: time="2025-01-29T16:25:53.371218716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 29 16:25:53.397351 containerd[1509]: time="2025-01-29T16:25:53.397264725Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:25:53.425008 containerd[1509]: time="2025-01-29T16:25:53.424958552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:25:53.430036 containerd[1509]: time="2025-01-29T16:25:53.429996992Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.854150098s"
Jan 29 16:25:53.430036 containerd[1509]: time="2025-01-29T16:25:53.430031839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 29 16:25:53.431506 containerd[1509]: time="2025-01-29T16:25:53.431468226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 16:25:53.440098 containerd[1509]: time="2025-01-29T16:25:53.438844527Z" level=info msg="CreateContainer within sandbox \"7960185e160d3330d6ccf488e0242ecff6f261551b15501c58ab6668a2907380\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 29 16:25:53.471674 kubelet[2623]: E0129 16:25:53.471620 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mjx6x" podUID="9cc09215-26d9-4b38-816c-abf4c3c659ad"
Jan 29 16:25:54.400821 containerd[1509]: time="2025-01-29T16:25:54.400769978Z" level=info msg="CreateContainer within sandbox \"7960185e160d3330d6ccf488e0242ecff6f261551b15501c58ab6668a2907380\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c1008691367cdd60f7f25049664be51b96508b168228b5e234e390d54e800b35\""
Jan 29 16:25:54.401330 containerd[1509]: time="2025-01-29T16:25:54.401252616Z" level=info msg="StartContainer for \"c1008691367cdd60f7f25049664be51b96508b168228b5e234e390d54e800b35\""
Jan 29 16:25:54.437599 systemd[1]: Started cri-containerd-c1008691367cdd60f7f25049664be51b96508b168228b5e234e390d54e800b35.scope - libcontainer container c1008691367cdd60f7f25049664be51b96508b168228b5e234e390d54e800b35.
Jan 29 16:25:54.544448 containerd[1509]: time="2025-01-29T16:25:54.544335339Z" level=info msg="StartContainer for \"c1008691367cdd60f7f25049664be51b96508b168228b5e234e390d54e800b35\" returns successfully" Jan 29 16:25:55.471387 kubelet[2623]: E0129 16:25:55.471322 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mjx6x" podUID="9cc09215-26d9-4b38-816c-abf4c3c659ad" Jan 29 16:25:55.549663 kubelet[2623]: E0129 16:25:55.549632 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:55.580717 kubelet[2623]: E0129 16:25:55.580634 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.580717 kubelet[2623]: W0129 16:25:55.580666 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.580717 kubelet[2623]: E0129 16:25:55.580690 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.580994 kubelet[2623]: E0129 16:25:55.580967 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.581029 kubelet[2623]: W0129 16:25:55.580995 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.581029 kubelet[2623]: E0129 16:25:55.581022 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.581386 kubelet[2623]: E0129 16:25:55.581351 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.581386 kubelet[2623]: W0129 16:25:55.581379 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.581484 kubelet[2623]: E0129 16:25:55.581411 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.581662 kubelet[2623]: E0129 16:25:55.581637 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.581662 kubelet[2623]: W0129 16:25:55.581652 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.581716 kubelet[2623]: E0129 16:25:55.581664 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.581891 kubelet[2623]: E0129 16:25:55.581876 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.581891 kubelet[2623]: W0129 16:25:55.581889 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.581941 kubelet[2623]: E0129 16:25:55.581899 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.582114 kubelet[2623]: E0129 16:25:55.582099 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.582114 kubelet[2623]: W0129 16:25:55.582111 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.582161 kubelet[2623]: E0129 16:25:55.582121 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.582389 kubelet[2623]: E0129 16:25:55.582372 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.582389 kubelet[2623]: W0129 16:25:55.582385 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.582480 kubelet[2623]: E0129 16:25:55.582417 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.582671 kubelet[2623]: E0129 16:25:55.582655 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.582671 kubelet[2623]: W0129 16:25:55.582668 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.582733 kubelet[2623]: E0129 16:25:55.582678 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.582952 kubelet[2623]: E0129 16:25:55.582925 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.582952 kubelet[2623]: W0129 16:25:55.582940 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.582952 kubelet[2623]: E0129 16:25:55.582950 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.583244 kubelet[2623]: E0129 16:25:55.583227 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.583244 kubelet[2623]: W0129 16:25:55.583242 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.583294 kubelet[2623]: E0129 16:25:55.583252 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.583488 kubelet[2623]: E0129 16:25:55.583472 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.583488 kubelet[2623]: W0129 16:25:55.583485 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.583616 kubelet[2623]: E0129 16:25:55.583495 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.583732 kubelet[2623]: E0129 16:25:55.583716 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.583755 kubelet[2623]: W0129 16:25:55.583731 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.583755 kubelet[2623]: E0129 16:25:55.583741 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.583988 kubelet[2623]: E0129 16:25:55.583972 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.583988 kubelet[2623]: W0129 16:25:55.583984 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.584052 kubelet[2623]: E0129 16:25:55.583994 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.584213 kubelet[2623]: E0129 16:25:55.584198 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.584213 kubelet[2623]: W0129 16:25:55.584210 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.584266 kubelet[2623]: E0129 16:25:55.584220 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.584473 kubelet[2623]: E0129 16:25:55.584452 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.584473 kubelet[2623]: W0129 16:25:55.584466 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.584549 kubelet[2623]: E0129 16:25:55.584475 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.636722 kubelet[2623]: I0129 16:25:55.636274 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bb985f7fc-m6jt8" podStartSLOduration=2.780568412 podStartE2EDuration="6.636259225s" podCreationTimestamp="2025-01-29 16:25:49 +0000 UTC" firstStartedPulling="2025-01-29 16:25:49.575533759 +0000 UTC m=+12.185030469" lastFinishedPulling="2025-01-29 16:25:53.431224562 +0000 UTC m=+16.040721282" observedRunningTime="2025-01-29 16:25:55.635905999 +0000 UTC m=+18.245402720" watchObservedRunningTime="2025-01-29 16:25:55.636259225 +0000 UTC m=+18.245755935" Jan 29 16:25:55.681102 kubelet[2623]: E0129 16:25:55.681048 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.681102 kubelet[2623]: W0129 16:25:55.681082 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.681102 kubelet[2623]: E0129 16:25:55.681106 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.681369 kubelet[2623]: E0129 16:25:55.681354 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.681369 kubelet[2623]: W0129 16:25:55.681367 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.681496 kubelet[2623]: E0129 16:25:55.681383 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.681668 kubelet[2623]: E0129 16:25:55.681652 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.681668 kubelet[2623]: W0129 16:25:55.681667 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.681754 kubelet[2623]: E0129 16:25:55.681682 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.681916 kubelet[2623]: E0129 16:25:55.681897 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.681916 kubelet[2623]: W0129 16:25:55.681912 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.682027 kubelet[2623]: E0129 16:25:55.681927 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.682167 kubelet[2623]: E0129 16:25:55.682152 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.682167 kubelet[2623]: W0129 16:25:55.682164 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.682250 kubelet[2623]: E0129 16:25:55.682179 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.682456 kubelet[2623]: E0129 16:25:55.682441 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.682456 kubelet[2623]: W0129 16:25:55.682453 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.682552 kubelet[2623]: E0129 16:25:55.682468 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.682983 kubelet[2623]: E0129 16:25:55.682969 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.682983 kubelet[2623]: W0129 16:25:55.682981 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.683173 kubelet[2623]: E0129 16:25:55.683011 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.683214 kubelet[2623]: E0129 16:25:55.683197 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.683214 kubelet[2623]: W0129 16:25:55.683208 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.683326 kubelet[2623]: E0129 16:25:55.683270 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.683479 kubelet[2623]: E0129 16:25:55.683463 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.683479 kubelet[2623]: W0129 16:25:55.683476 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.683569 kubelet[2623]: E0129 16:25:55.683492 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.683820 kubelet[2623]: E0129 16:25:55.683710 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.683820 kubelet[2623]: W0129 16:25:55.683722 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.683820 kubelet[2623]: E0129 16:25:55.683733 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.684107 kubelet[2623]: E0129 16:25:55.684088 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.684107 kubelet[2623]: W0129 16:25:55.684101 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.684180 kubelet[2623]: E0129 16:25:55.684136 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.684384 kubelet[2623]: E0129 16:25:55.684358 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.684384 kubelet[2623]: W0129 16:25:55.684370 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.684384 kubelet[2623]: E0129 16:25:55.684383 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.684634 kubelet[2623]: E0129 16:25:55.684607 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.684634 kubelet[2623]: W0129 16:25:55.684619 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.684634 kubelet[2623]: E0129 16:25:55.684633 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.684990 kubelet[2623]: E0129 16:25:55.684973 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.684990 kubelet[2623]: W0129 16:25:55.684986 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.685090 kubelet[2623]: E0129 16:25:55.685002 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.685234 kubelet[2623]: E0129 16:25:55.685220 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.685234 kubelet[2623]: W0129 16:25:55.685232 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.685304 kubelet[2623]: E0129 16:25:55.685246 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.685485 kubelet[2623]: E0129 16:25:55.685472 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.685485 kubelet[2623]: W0129 16:25:55.685481 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.685577 kubelet[2623]: E0129 16:25:55.685493 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.685724 kubelet[2623]: E0129 16:25:55.685708 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.685724 kubelet[2623]: W0129 16:25:55.685719 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.685795 kubelet[2623]: E0129 16:25:55.685734 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:25:55.685944 kubelet[2623]: E0129 16:25:55.685929 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:25:55.685944 kubelet[2623]: W0129 16:25:55.685941 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:25:55.686018 kubelet[2623]: E0129 16:25:55.685950 2623 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:25:55.942673 containerd[1509]: time="2025-01-29T16:25:55.942609724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:55.943831 containerd[1509]: time="2025-01-29T16:25:55.943782199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 29 16:25:55.945181 containerd[1509]: time="2025-01-29T16:25:55.945148642Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:55.947465 containerd[1509]: time="2025-01-29T16:25:55.947428546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:55.948344 containerd[1509]: time="2025-01-29T16:25:55.948288896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.516789288s" Jan 29 16:25:55.948422 containerd[1509]: time="2025-01-29T16:25:55.948341718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 16:25:55.949843 containerd[1509]: time="2025-01-29T16:25:55.949805509Z" level=info msg="CreateContainer within sandbox \"2d33faef49792b97eb88da08fe174a20ea3085ac438b330cf96a074dcfdfd579\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 16:25:55.965027 containerd[1509]: time="2025-01-29T16:25:55.964979590Z" level=info msg="CreateContainer within sandbox \"2d33faef49792b97eb88da08fe174a20ea3085ac438b330cf96a074dcfdfd579\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4dfdc6345733eafca5ffdddb2c772f91277353f3ab9a49877cf5d8a66f158088\"" Jan 29 16:25:55.965610 containerd[1509]: time="2025-01-29T16:25:55.965371360Z" level=info msg="StartContainer for \"4dfdc6345733eafca5ffdddb2c772f91277353f3ab9a49877cf5d8a66f158088\"" Jan 29 16:25:55.988863 systemd[1]: run-containerd-runc-k8s.io-4dfdc6345733eafca5ffdddb2c772f91277353f3ab9a49877cf5d8a66f158088-runc.9qt6Cd.mount: Deactivated successfully. Jan 29 16:25:56.000526 systemd[1]: Started cri-containerd-4dfdc6345733eafca5ffdddb2c772f91277353f3ab9a49877cf5d8a66f158088.scope - libcontainer container 4dfdc6345733eafca5ffdddb2c772f91277353f3ab9a49877cf5d8a66f158088. Jan 29 16:25:56.032150 containerd[1509]: time="2025-01-29T16:25:56.032049886Z" level=info msg="StartContainer for \"4dfdc6345733eafca5ffdddb2c772f91277353f3ab9a49877cf5d8a66f158088\" returns successfully" Jan 29 16:25:56.044988 systemd[1]: cri-containerd-4dfdc6345733eafca5ffdddb2c772f91277353f3ab9a49877cf5d8a66f158088.scope: Deactivated successfully. 
Jan 29 16:25:56.364899 containerd[1509]: time="2025-01-29T16:25:56.364822058Z" level=info msg="shim disconnected" id=4dfdc6345733eafca5ffdddb2c772f91277353f3ab9a49877cf5d8a66f158088 namespace=k8s.io Jan 29 16:25:56.364899 containerd[1509]: time="2025-01-29T16:25:56.364877155Z" level=warning msg="cleaning up after shim disconnected" id=4dfdc6345733eafca5ffdddb2c772f91277353f3ab9a49877cf5d8a66f158088 namespace=k8s.io Jan 29 16:25:56.364899 containerd[1509]: time="2025-01-29T16:25:56.364885712Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:25:56.550951 kubelet[2623]: E0129 16:25:56.550920 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:56.557183 kubelet[2623]: E0129 16:25:56.557146 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:56.558151 containerd[1509]: time="2025-01-29T16:25:56.558085389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 16:25:56.961276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dfdc6345733eafca5ffdddb2c772f91277353f3ab9a49877cf5d8a66f158088-rootfs.mount: Deactivated successfully. 
Jan 29 16:25:57.471902 kubelet[2623]: E0129 16:25:57.471852 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mjx6x" podUID="9cc09215-26d9-4b38-816c-abf4c3c659ad"
Jan 29 16:25:59.533413 kubelet[2623]: E0129 16:25:59.533335 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mjx6x" podUID="9cc09215-26d9-4b38-816c-abf4c3c659ad"
Jan 29 16:26:00.690537 containerd[1509]: time="2025-01-29T16:26:00.690485578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:00.721457 containerd[1509]: time="2025-01-29T16:26:00.721426182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 29 16:26:00.756007 containerd[1509]: time="2025-01-29T16:26:00.755966309Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:00.797372 containerd[1509]: time="2025-01-29T16:26:00.797335848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:00.798135 containerd[1509]: time="2025-01-29T16:26:00.798110252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.239967541s"
Jan 29 16:26:00.798182 containerd[1509]: time="2025-01-29T16:26:00.798143555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 29 16:26:00.801114 containerd[1509]: time="2025-01-29T16:26:00.801086211Z" level=info msg="CreateContainer within sandbox \"2d33faef49792b97eb88da08fe174a20ea3085ac438b330cf96a074dcfdfd579\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 16:26:01.471110 kubelet[2623]: E0129 16:26:01.471054 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mjx6x" podUID="9cc09215-26d9-4b38-816c-abf4c3c659ad"
Jan 29 16:26:01.710041 containerd[1509]: time="2025-01-29T16:26:01.709993476Z" level=info msg="CreateContainer within sandbox \"2d33faef49792b97eb88da08fe174a20ea3085ac438b330cf96a074dcfdfd579\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5ff02df55c183849c66e0ce6a5045921aa71ea57ed95a56ab7de3be60fe237cd\""
Jan 29 16:26:01.710503 containerd[1509]: time="2025-01-29T16:26:01.710447852Z" level=info msg="StartContainer for \"5ff02df55c183849c66e0ce6a5045921aa71ea57ed95a56ab7de3be60fe237cd\""
Jan 29 16:26:01.743540 systemd[1]: Started cri-containerd-5ff02df55c183849c66e0ce6a5045921aa71ea57ed95a56ab7de3be60fe237cd.scope - libcontainer container 5ff02df55c183849c66e0ce6a5045921aa71ea57ed95a56ab7de3be60fe237cd.
Jan 29 16:26:02.152354 containerd[1509]: time="2025-01-29T16:26:02.152202381Z" level=info msg="StartContainer for \"5ff02df55c183849c66e0ce6a5045921aa71ea57ed95a56ab7de3be60fe237cd\" returns successfully"
Jan 29 16:26:03.157991 kubelet[2623]: E0129 16:26:03.157447 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:26:03.471665 kubelet[2623]: E0129 16:26:03.471503 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mjx6x" podUID="9cc09215-26d9-4b38-816c-abf4c3c659ad"
Jan 29 16:26:03.745048 systemd[1]: cri-containerd-5ff02df55c183849c66e0ce6a5045921aa71ea57ed95a56ab7de3be60fe237cd.scope: Deactivated successfully.
Jan 29 16:26:03.745393 systemd[1]: cri-containerd-5ff02df55c183849c66e0ce6a5045921aa71ea57ed95a56ab7de3be60fe237cd.scope: Consumed 554ms CPU time, 158.6M memory peak, 4K read from disk, 151M written to disk.
Jan 29 16:26:03.765746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ff02df55c183849c66e0ce6a5045921aa71ea57ed95a56ab7de3be60fe237cd-rootfs.mount: Deactivated successfully.
Jan 29 16:26:03.770366 containerd[1509]: time="2025-01-29T16:26:03.770312023Z" level=info msg="shim disconnected" id=5ff02df55c183849c66e0ce6a5045921aa71ea57ed95a56ab7de3be60fe237cd namespace=k8s.io
Jan 29 16:26:03.770711 containerd[1509]: time="2025-01-29T16:26:03.770367040Z" level=warning msg="cleaning up after shim disconnected" id=5ff02df55c183849c66e0ce6a5045921aa71ea57ed95a56ab7de3be60fe237cd namespace=k8s.io
Jan 29 16:26:03.770711 containerd[1509]: time="2025-01-29T16:26:03.770376347Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:26:03.832845 kubelet[2623]: I0129 16:26:03.832820 2623 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 29 16:26:03.861782 systemd[1]: Created slice kubepods-burstable-pod5b88c73e_075c_4156_a283_4de15bccf36a.slice - libcontainer container kubepods-burstable-pod5b88c73e_075c_4156_a283_4de15bccf36a.slice.
Jan 29 16:26:03.869365 systemd[1]: Created slice kubepods-burstable-podd39cd512_c288_44a0_b875_c359ef74dd3f.slice - libcontainer container kubepods-burstable-podd39cd512_c288_44a0_b875_c359ef74dd3f.slice.
Jan 29 16:26:03.876177 systemd[1]: Created slice kubepods-besteffort-poddb40db35_a526_4e56_80d1_8bc8cd956a1c.slice - libcontainer container kubepods-besteffort-poddb40db35_a526_4e56_80d1_8bc8cd956a1c.slice.
Jan 29 16:26:03.882148 systemd[1]: Created slice kubepods-besteffort-pod5f804998_68b0_408c_beb0_2887c4ad4908.slice - libcontainer container kubepods-besteffort-pod5f804998_68b0_408c_beb0_2887c4ad4908.slice.
Jan 29 16:26:03.888487 systemd[1]: Created slice kubepods-besteffort-podd3ce4654_53de_4d6a_8744_f657f07eba4f.slice - libcontainer container kubepods-besteffort-podd3ce4654_53de_4d6a_8744_f657f07eba4f.slice.
Jan 29 16:26:04.038984 kubelet[2623]: I0129 16:26:04.038871 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db40db35-a526-4e56-80d1-8bc8cd956a1c-tigera-ca-bundle\") pod \"calico-kube-controllers-66b4c55cd5-pmg6b\" (UID: \"db40db35-a526-4e56-80d1-8bc8cd956a1c\") " pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b"
Jan 29 16:26:04.039151 kubelet[2623]: I0129 16:26:04.038992 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5f804998-68b0-408c-beb0-2887c4ad4908-calico-apiserver-certs\") pod \"calico-apiserver-5947747589-n86vd\" (UID: \"5f804998-68b0-408c-beb0-2887c4ad4908\") " pod="calico-apiserver/calico-apiserver-5947747589-n86vd"
Jan 29 16:26:04.039151 kubelet[2623]: I0129 16:26:04.039015 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b88c73e-075c-4156-a283-4de15bccf36a-config-volume\") pod \"coredns-6f6b679f8f-llv2c\" (UID: \"5b88c73e-075c-4156-a283-4de15bccf36a\") " pod="kube-system/coredns-6f6b679f8f-llv2c"
Jan 29 16:26:04.039151 kubelet[2623]: I0129 16:26:04.039035 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d39cd512-c288-44a0-b875-c359ef74dd3f-config-volume\") pod \"coredns-6f6b679f8f-6kc2r\" (UID: \"d39cd512-c288-44a0-b875-c359ef74dd3f\") " pod="kube-system/coredns-6f6b679f8f-6kc2r"
Jan 29 16:26:04.039151 kubelet[2623]: I0129 16:26:04.039053 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fk5z\" (UniqueName: \"kubernetes.io/projected/5b88c73e-075c-4156-a283-4de15bccf36a-kube-api-access-2fk5z\") pod \"coredns-6f6b679f8f-llv2c\" (UID: \"5b88c73e-075c-4156-a283-4de15bccf36a\") " pod="kube-system/coredns-6f6b679f8f-llv2c"
Jan 29 16:26:04.039151 kubelet[2623]: I0129 16:26:04.039087 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pzhv\" (UniqueName: \"kubernetes.io/projected/d3ce4654-53de-4d6a-8744-f657f07eba4f-kube-api-access-6pzhv\") pod \"calico-apiserver-5947747589-tzh8v\" (UID: \"d3ce4654-53de-4d6a-8744-f657f07eba4f\") " pod="calico-apiserver/calico-apiserver-5947747589-tzh8v"
Jan 29 16:26:04.039339 kubelet[2623]: I0129 16:26:04.039103 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mftm8\" (UniqueName: \"kubernetes.io/projected/d39cd512-c288-44a0-b875-c359ef74dd3f-kube-api-access-mftm8\") pod \"coredns-6f6b679f8f-6kc2r\" (UID: \"d39cd512-c288-44a0-b875-c359ef74dd3f\") " pod="kube-system/coredns-6f6b679f8f-6kc2r"
Jan 29 16:26:04.039339 kubelet[2623]: I0129 16:26:04.039122 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d3ce4654-53de-4d6a-8744-f657f07eba4f-calico-apiserver-certs\") pod \"calico-apiserver-5947747589-tzh8v\" (UID: \"d3ce4654-53de-4d6a-8744-f657f07eba4f\") " pod="calico-apiserver/calico-apiserver-5947747589-tzh8v"
Jan 29 16:26:04.039339 kubelet[2623]: I0129 16:26:04.039136 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnrhc\" (UniqueName: \"kubernetes.io/projected/5f804998-68b0-408c-beb0-2887c4ad4908-kube-api-access-hnrhc\") pod \"calico-apiserver-5947747589-n86vd\" (UID: \"5f804998-68b0-408c-beb0-2887c4ad4908\") " pod="calico-apiserver/calico-apiserver-5947747589-n86vd"
Jan 29 16:26:04.039339 kubelet[2623]: I0129 16:26:04.039151 2623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq9ph\" (UniqueName: \"kubernetes.io/projected/db40db35-a526-4e56-80d1-8bc8cd956a1c-kube-api-access-pq9ph\") pod \"calico-kube-controllers-66b4c55cd5-pmg6b\" (UID: \"db40db35-a526-4e56-80d1-8bc8cd956a1c\") " pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b"
Jan 29 16:26:04.161824 kubelet[2623]: E0129 16:26:04.161793 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:26:04.164037 containerd[1509]: time="2025-01-29T16:26:04.163986815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 29 16:26:04.166350 kubelet[2623]: E0129 16:26:04.166297 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:26:04.166938 containerd[1509]: time="2025-01-29T16:26:04.166905283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-llv2c,Uid:5b88c73e-075c-4156-a283-4de15bccf36a,Namespace:kube-system,Attempt:0,}"
Jan 29 16:26:04.173541 kubelet[2623]: E0129 16:26:04.173477 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:26:04.174091 containerd[1509]: time="2025-01-29T16:26:04.173785182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6kc2r,Uid:d39cd512-c288-44a0-b875-c359ef74dd3f,Namespace:kube-system,Attempt:0,}"
Jan 29 16:26:04.179868 containerd[1509]: time="2025-01-29T16:26:04.179812673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4c55cd5-pmg6b,Uid:db40db35-a526-4e56-80d1-8bc8cd956a1c,Namespace:calico-system,Attempt:0,}"
Jan 29 16:26:04.185600 containerd[1509]: time="2025-01-29T16:26:04.185578211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-n86vd,Uid:5f804998-68b0-408c-beb0-2887c4ad4908,Namespace:calico-apiserver,Attempt:0,}"
Jan 29 16:26:04.191998 containerd[1509]: time="2025-01-29T16:26:04.191955365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-tzh8v,Uid:d3ce4654-53de-4d6a-8744-f657f07eba4f,Namespace:calico-apiserver,Attempt:0,}"
Jan 29 16:26:04.294253 containerd[1509]: time="2025-01-29T16:26:04.293807093Z" level=error msg="Failed to destroy network for sandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.294390 containerd[1509]: time="2025-01-29T16:26:04.294296725Z" level=error msg="encountered an error cleaning up failed sandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.294474 containerd[1509]: time="2025-01-29T16:26:04.294391677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-llv2c,Uid:5b88c73e-075c-4156-a283-4de15bccf36a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.294709 kubelet[2623]: E0129 16:26:04.294662 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.294778 kubelet[2623]: E0129 16:26:04.294730 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-llv2c"
Jan 29 16:26:04.294778 kubelet[2623]: E0129 16:26:04.294750 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-llv2c"
Jan 29 16:26:04.294851 kubelet[2623]: E0129 16:26:04.294795 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-llv2c_kube-system(5b88c73e-075c-4156-a283-4de15bccf36a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-llv2c_kube-system(5b88c73e-075c-4156-a283-4de15bccf36a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-llv2c" podUID="5b88c73e-075c-4156-a283-4de15bccf36a"
Jan 29 16:26:04.297134 containerd[1509]: time="2025-01-29T16:26:04.297072658Z" level=error msg="Failed to destroy network for sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.298074 containerd[1509]: time="2025-01-29T16:26:04.298038655Z" level=error msg="encountered an error cleaning up failed sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.298141 containerd[1509]: time="2025-01-29T16:26:04.298090544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6kc2r,Uid:d39cd512-c288-44a0-b875-c359ef74dd3f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.298569 kubelet[2623]: E0129 16:26:04.298345 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.298569 kubelet[2623]: E0129 16:26:04.298427 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6kc2r"
Jan 29 16:26:04.298569 kubelet[2623]: E0129 16:26:04.298451 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6kc2r"
Jan 29 16:26:04.298731 kubelet[2623]: E0129 16:26:04.298494 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6kc2r_kube-system(d39cd512-c288-44a0-b875-c359ef74dd3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6kc2r_kube-system(d39cd512-c288-44a0-b875-c359ef74dd3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6kc2r" podUID="d39cd512-c288-44a0-b875-c359ef74dd3f"
Jan 29 16:26:04.305995 containerd[1509]: time="2025-01-29T16:26:04.305941859Z" level=error msg="Failed to destroy network for sandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.306354 containerd[1509]: time="2025-01-29T16:26:04.306326188Z" level=error msg="encountered an error cleaning up failed sandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.306451 containerd[1509]: time="2025-01-29T16:26:04.306379230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4c55cd5-pmg6b,Uid:db40db35-a526-4e56-80d1-8bc8cd956a1c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.306758 kubelet[2623]: E0129 16:26:04.306684 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.306821 kubelet[2623]: E0129 16:26:04.306780 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b"
Jan 29 16:26:04.306874 kubelet[2623]: E0129 16:26:04.306818 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b"
Jan 29 16:26:04.306919 kubelet[2623]: E0129 16:26:04.306893 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66b4c55cd5-pmg6b_calico-system(db40db35-a526-4e56-80d1-8bc8cd956a1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66b4c55cd5-pmg6b_calico-system(db40db35-a526-4e56-80d1-8bc8cd956a1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b" podUID="db40db35-a526-4e56-80d1-8bc8cd956a1c"
Jan 29 16:26:04.308108 containerd[1509]: time="2025-01-29T16:26:04.308032928Z" level=error msg="Failed to destroy network for sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.308886 containerd[1509]: time="2025-01-29T16:26:04.308744886Z" level=error msg="encountered an error cleaning up failed sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.309020 containerd[1509]: time="2025-01-29T16:26:04.308992682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-tzh8v,Uid:d3ce4654-53de-4d6a-8744-f657f07eba4f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.309404 kubelet[2623]: E0129 16:26:04.309354 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.309456 kubelet[2623]: E0129 16:26:04.309413 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v"
Jan 29 16:26:04.309456 kubelet[2623]: E0129 16:26:04.309429 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v"
Jan 29 16:26:04.309541 kubelet[2623]: E0129 16:26:04.309457 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5947747589-tzh8v_calico-apiserver(d3ce4654-53de-4d6a-8744-f657f07eba4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5947747589-tzh8v_calico-apiserver(d3ce4654-53de-4d6a-8744-f657f07eba4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" podUID="d3ce4654-53de-4d6a-8744-f657f07eba4f"
Jan 29 16:26:04.313502 containerd[1509]: time="2025-01-29T16:26:04.313461147Z" level=error msg="Failed to destroy network for sandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.313858 containerd[1509]: time="2025-01-29T16:26:04.313831809Z" level=error msg="encountered an error cleaning up failed sandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.313903 containerd[1509]: time="2025-01-29T16:26:04.313887366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-n86vd,Uid:5f804998-68b0-408c-beb0-2887c4ad4908,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.314125 kubelet[2623]: E0129 16:26:04.314083 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:04.314191 kubelet[2623]: E0129 16:26:04.314146 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-n86vd"
Jan 29 16:26:04.314191 kubelet[2623]: E0129 16:26:04.314166 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-n86vd"
Jan 29 16:26:04.314248 kubelet[2623]: E0129 16:26:04.314213 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5947747589-n86vd_calico-apiserver(5f804998-68b0-408c-beb0-2887c4ad4908)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5947747589-n86vd_calico-apiserver(5f804998-68b0-408c-beb0-2887c4ad4908)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5947747589-n86vd" podUID="5f804998-68b0-408c-beb0-2887c4ad4908"
Jan 29 16:26:05.163637 kubelet[2623]: I0129 16:26:05.163594 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c"
Jan 29 16:26:05.164449 containerd[1509]: time="2025-01-29T16:26:05.164378570Z" level=info msg="StopPodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\""
Jan 29 16:26:05.164887 containerd[1509]: time="2025-01-29T16:26:05.164624843Z" level=info msg="Ensure that sandbox 3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c in task-service has been cleanup successfully"
Jan 29 16:26:05.164958 kubelet[2623]: I0129 16:26:05.164475 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e"
Jan 29 16:26:05.165121 containerd[1509]: time="2025-01-29T16:26:05.165099905Z" level=info msg="StopPodSandbox for \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\""
Jan 29 16:26:05.165276 containerd[1509]: time="2025-01-29T16:26:05.165244072Z" level=info msg="Ensure that sandbox e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e in task-service has been cleanup successfully"
Jan 29 16:26:05.165951 containerd[1509]: time="2025-01-29T16:26:05.165818795Z" level=info msg="TearDown network for sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" successfully"
Jan 29 16:26:05.165951 containerd[1509]: time="2025-01-29T16:26:05.165873831Z" level=info msg="StopPodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" returns successfully"
Jan 29 16:26:05.166007 containerd[1509]: time="2025-01-29T16:26:05.165949997Z" level=info msg="TearDown network for sandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\" successfully"
Jan 29 16:26:05.166007 containerd[1509]: time="2025-01-29T16:26:05.165983141Z" level=info msg="StopPodSandbox for \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\" returns successfully"
Jan 29 16:26:05.167190 kubelet[2623]: I0129 16:26:05.167047 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4"
Jan 29 16:26:05.167231 containerd[1509]: time="2025-01-29T16:26:05.166903458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-n86vd,Uid:5f804998-68b0-408c-beb0-2887c4ad4908,Namespace:calico-apiserver,Attempt:1,}"
Jan 29 16:26:05.167450 containerd[1509]: time="2025-01-29T16:26:05.167303496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-tzh8v,Uid:d3ce4654-53de-4d6a-8744-f657f07eba4f,Namespace:calico-apiserver,Attempt:1,}"
Jan 29 16:26:05.168120 containerd[1509]: time="2025-01-29T16:26:05.168081739Z" level=info msg="StopPodSandbox for \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\""
Jan 29 16:26:05.168212 systemd[1]: run-netns-cni\x2d8d09d827\x2dec86\x2d207e\x2d7a34\x2d1eef1588331d.mount: Deactivated successfully.
Jan 29 16:26:05.168371 systemd[1]: run-netns-cni\x2dd8fcf22f\x2d077c\x2d932f\x2d91b9\x2da95ac10ee336.mount: Deactivated successfully. Jan 29 16:26:05.168575 kubelet[2623]: I0129 16:26:05.168520 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e" Jan 29 16:26:05.168601 containerd[1509]: time="2025-01-29T16:26:05.168218702Z" level=info msg="Ensure that sandbox c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4 in task-service has been cleanup successfully" Jan 29 16:26:05.169305 containerd[1509]: time="2025-01-29T16:26:05.168804608Z" level=info msg="StopPodSandbox for \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\"" Jan 29 16:26:05.169305 containerd[1509]: time="2025-01-29T16:26:05.168938735Z" level=info msg="Ensure that sandbox de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e in task-service has been cleanup successfully" Jan 29 16:26:05.169305 containerd[1509]: time="2025-01-29T16:26:05.169284860Z" level=info msg="TearDown network for sandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\" successfully" Jan 29 16:26:05.169305 containerd[1509]: time="2025-01-29T16:26:05.169296852Z" level=info msg="StopPodSandbox for \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\" returns successfully" Jan 29 16:26:05.169581 containerd[1509]: time="2025-01-29T16:26:05.169560198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4c55cd5-pmg6b,Uid:db40db35-a526-4e56-80d1-8bc8cd956a1c,Namespace:calico-system,Attempt:1,}" Jan 29 16:26:05.170132 containerd[1509]: time="2025-01-29T16:26:05.170004070Z" level=info msg="TearDown network for sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" successfully" Jan 29 16:26:05.170132 containerd[1509]: time="2025-01-29T16:26:05.170022476Z" level=info msg="StopPodSandbox for 
\"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" returns successfully" Jan 29 16:26:05.170196 kubelet[2623]: E0129 16:26:05.170171 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:05.170829 containerd[1509]: time="2025-01-29T16:26:05.170421873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6kc2r,Uid:d39cd512-c288-44a0-b875-c359ef74dd3f,Namespace:kube-system,Attempt:1,}" Jan 29 16:26:05.170948 kubelet[2623]: I0129 16:26:05.170732 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c" Jan 29 16:26:05.171259 containerd[1509]: time="2025-01-29T16:26:05.171230836Z" level=info msg="StopPodSandbox for \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\"" Jan 29 16:26:05.171465 containerd[1509]: time="2025-01-29T16:26:05.171443514Z" level=info msg="Ensure that sandbox 80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c in task-service has been cleanup successfully" Jan 29 16:26:05.171683 containerd[1509]: time="2025-01-29T16:26:05.171664718Z" level=info msg="TearDown network for sandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\" successfully" Jan 29 16:26:05.171683 containerd[1509]: time="2025-01-29T16:26:05.171680389Z" level=info msg="StopPodSandbox for \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\" returns successfully" Jan 29 16:26:05.171853 kubelet[2623]: E0129 16:26:05.171826 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:05.171989 systemd[1]: run-netns-cni\x2db3e97589\x2dfe8e\x2ddcaf\x2d0d5f\x2da23c6b35718c.mount: Deactivated successfully. 
Jan 29 16:26:05.172099 containerd[1509]: time="2025-01-29T16:26:05.172026072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-llv2c,Uid:5b88c73e-075c-4156-a283-4de15bccf36a,Namespace:kube-system,Attempt:1,}" Jan 29 16:26:05.172277 systemd[1]: run-netns-cni\x2d9fabf210\x2d9886\x2d61e6\x2d0700\x2dfa6bcc6b3fe4.mount: Deactivated successfully. Jan 29 16:26:05.175602 systemd[1]: run-netns-cni\x2dc547c1f4\x2dfbb9\x2d02a0\x2d271a\x2ddf9b1bc33f0e.mount: Deactivated successfully. Jan 29 16:26:05.424448 containerd[1509]: time="2025-01-29T16:26:05.422681218Z" level=error msg="Failed to destroy network for sandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.424448 containerd[1509]: time="2025-01-29T16:26:05.423083180Z" level=error msg="encountered an error cleaning up failed sandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.424448 containerd[1509]: time="2025-01-29T16:26:05.423141842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-tzh8v,Uid:d3ce4654-53de-4d6a-8744-f657f07eba4f,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.424676 kubelet[2623]: E0129 16:26:05.423391 2623 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.424676 kubelet[2623]: E0129 16:26:05.423461 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" Jan 29 16:26:05.424676 kubelet[2623]: E0129 16:26:05.423705 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" Jan 29 16:26:05.424813 kubelet[2623]: E0129 16:26:05.423769 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5947747589-tzh8v_calico-apiserver(d3ce4654-53de-4d6a-8744-f657f07eba4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5947747589-tzh8v_calico-apiserver(d3ce4654-53de-4d6a-8744-f657f07eba4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" podUID="d3ce4654-53de-4d6a-8744-f657f07eba4f" Jan 29 16:26:05.429214 containerd[1509]: time="2025-01-29T16:26:05.429171518Z" level=error msg="Failed to destroy network for sandbox \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.429543 containerd[1509]: time="2025-01-29T16:26:05.429515409Z" level=error msg="encountered an error cleaning up failed sandbox \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.429634 containerd[1509]: time="2025-01-29T16:26:05.429569132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-n86vd,Uid:5f804998-68b0-408c-beb0-2887c4ad4908,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.430183 kubelet[2623]: E0129 16:26:05.429755 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 29 16:26:05.430183 kubelet[2623]: E0129 16:26:05.429788 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-n86vd" Jan 29 16:26:05.430183 kubelet[2623]: E0129 16:26:05.429805 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-n86vd" Jan 29 16:26:05.430309 kubelet[2623]: E0129 16:26:05.429845 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5947747589-n86vd_calico-apiserver(5f804998-68b0-408c-beb0-2887c4ad4908)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5947747589-n86vd_calico-apiserver(5f804998-68b0-408c-beb0-2887c4ad4908)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5947747589-n86vd" podUID="5f804998-68b0-408c-beb0-2887c4ad4908" Jan 29 16:26:05.431136 containerd[1509]: time="2025-01-29T16:26:05.430518053Z" level=error msg="Failed to destroy 
network for sandbox \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.431753 containerd[1509]: time="2025-01-29T16:26:05.431724019Z" level=error msg="encountered an error cleaning up failed sandbox \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.431822 containerd[1509]: time="2025-01-29T16:26:05.431765779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4c55cd5-pmg6b,Uid:db40db35-a526-4e56-80d1-8bc8cd956a1c,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.432510 kubelet[2623]: E0129 16:26:05.432282 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.432510 kubelet[2623]: E0129 16:26:05.432313 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b" Jan 29 16:26:05.432510 kubelet[2623]: E0129 16:26:05.432339 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b" Jan 29 16:26:05.432639 kubelet[2623]: E0129 16:26:05.432367 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66b4c55cd5-pmg6b_calico-system(db40db35-a526-4e56-80d1-8bc8cd956a1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66b4c55cd5-pmg6b_calico-system(db40db35-a526-4e56-80d1-8bc8cd956a1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b" podUID="db40db35-a526-4e56-80d1-8bc8cd956a1c" Jan 29 16:26:05.433932 containerd[1509]: time="2025-01-29T16:26:05.433880489Z" level=error msg="Failed to destroy network for sandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 
16:26:05.434541 containerd[1509]: time="2025-01-29T16:26:05.434514166Z" level=error msg="encountered an error cleaning up failed sandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.434677 containerd[1509]: time="2025-01-29T16:26:05.434652972Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6kc2r,Uid:d39cd512-c288-44a0-b875-c359ef74dd3f,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.434951 kubelet[2623]: E0129 16:26:05.434895 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.434951 kubelet[2623]: E0129 16:26:05.434946 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6kc2r" Jan 29 16:26:05.435052 kubelet[2623]: E0129 16:26:05.434961 2623 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6kc2r" Jan 29 16:26:05.435052 kubelet[2623]: E0129 16:26:05.434994 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6kc2r_kube-system(d39cd512-c288-44a0-b875-c359ef74dd3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6kc2r_kube-system(d39cd512-c288-44a0-b875-c359ef74dd3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6kc2r" podUID="d39cd512-c288-44a0-b875-c359ef74dd3f" Jan 29 16:26:05.448108 containerd[1509]: time="2025-01-29T16:26:05.448056486Z" level=error msg="Failed to destroy network for sandbox \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.448534 containerd[1509]: time="2025-01-29T16:26:05.448506680Z" level=error msg="encountered an error cleaning up failed sandbox \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.448604 containerd[1509]: time="2025-01-29T16:26:05.448577876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-llv2c,Uid:5b88c73e-075c-4156-a283-4de15bccf36a,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.448832 kubelet[2623]: E0129 16:26:05.448794 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.448889 kubelet[2623]: E0129 16:26:05.448859 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-llv2c" Jan 29 16:26:05.448922 kubelet[2623]: E0129 16:26:05.448884 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-llv2c" Jan 29 16:26:05.448975 kubelet[2623]: E0129 16:26:05.448940 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-llv2c_kube-system(5b88c73e-075c-4156-a283-4de15bccf36a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-llv2c_kube-system(5b88c73e-075c-4156-a283-4de15bccf36a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-llv2c" podUID="5b88c73e-075c-4156-a283-4de15bccf36a" Jan 29 16:26:05.476964 systemd[1]: Created slice kubepods-besteffort-pod9cc09215_26d9_4b38_816c_abf4c3c659ad.slice - libcontainer container kubepods-besteffort-pod9cc09215_26d9_4b38_816c_abf4c3c659ad.slice. 
Jan 29 16:26:05.479310 containerd[1509]: time="2025-01-29T16:26:05.479280160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjx6x,Uid:9cc09215-26d9-4b38-816c-abf4c3c659ad,Namespace:calico-system,Attempt:0,}" Jan 29 16:26:05.545756 containerd[1509]: time="2025-01-29T16:26:05.545706541Z" level=error msg="Failed to destroy network for sandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.546093 containerd[1509]: time="2025-01-29T16:26:05.546068356Z" level=error msg="encountered an error cleaning up failed sandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.546143 containerd[1509]: time="2025-01-29T16:26:05.546125885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjx6x,Uid:9cc09215-26d9-4b38-816c-abf4c3c659ad,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.546380 kubelet[2623]: E0129 16:26:05.546339 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:05.546447 kubelet[2623]: E0129 16:26:05.546415 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mjx6x" Jan 29 16:26:05.546447 kubelet[2623]: E0129 16:26:05.546436 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mjx6x" Jan 29 16:26:05.546491 kubelet[2623]: E0129 16:26:05.546474 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mjx6x_calico-system(9cc09215-26d9-4b38-816c-abf4c3c659ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mjx6x_calico-system(9cc09215-26d9-4b38-816c-abf4c3c659ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mjx6x" podUID="9cc09215-26d9-4b38-816c-abf4c3c659ad" Jan 29 16:26:05.767896 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6-shm.mount: Deactivated successfully. Jan 29 16:26:06.173484 kubelet[2623]: I0129 16:26:06.173435 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6" Jan 29 16:26:06.173986 containerd[1509]: time="2025-01-29T16:26:06.173954777Z" level=info msg="StopPodSandbox for \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\"" Jan 29 16:26:06.174316 containerd[1509]: time="2025-01-29T16:26:06.174232540Z" level=info msg="Ensure that sandbox 89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6 in task-service has been cleanup successfully" Jan 29 16:26:06.174532 containerd[1509]: time="2025-01-29T16:26:06.174460798Z" level=info msg="TearDown network for sandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\" successfully" Jan 29 16:26:06.174532 containerd[1509]: time="2025-01-29T16:26:06.174473211Z" level=info msg="StopPodSandbox for \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\" returns successfully" Jan 29 16:26:06.175121 kubelet[2623]: I0129 16:26:06.174707 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b" Jan 29 16:26:06.175307 containerd[1509]: time="2025-01-29T16:26:06.175286571Z" level=info msg="StopPodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\"" Jan 29 16:26:06.175467 containerd[1509]: time="2025-01-29T16:26:06.175445787Z" level=info msg="TearDown network for sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" successfully" Jan 29 16:26:06.175560 containerd[1509]: time="2025-01-29T16:26:06.175505892Z" level=info msg="StopPodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" returns 
successfully" Jan 29 16:26:06.175598 containerd[1509]: time="2025-01-29T16:26:06.175549136Z" level=info msg="StopPodSandbox for \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\"" Jan 29 16:26:06.175961 containerd[1509]: time="2025-01-29T16:26:06.175765981Z" level=info msg="Ensure that sandbox 12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b in task-service has been cleanup successfully" Jan 29 16:26:06.176319 containerd[1509]: time="2025-01-29T16:26:06.176274988Z" level=info msg="TearDown network for sandbox \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\" successfully" Jan 29 16:26:06.176529 containerd[1509]: time="2025-01-29T16:26:06.176411420Z" level=info msg="StopPodSandbox for \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\" returns successfully" Jan 29 16:26:06.176685 containerd[1509]: time="2025-01-29T16:26:06.176569192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-tzh8v,Uid:d3ce4654-53de-4d6a-8744-f657f07eba4f,Namespace:calico-apiserver,Attempt:2,}" Jan 29 16:26:06.177327 systemd[1]: run-netns-cni\x2d615d0d76\x2d6d68\x2df7f0\x2d3056\x2d1978765915bb.mount: Deactivated successfully. 
Jan 29 16:26:06.177755 containerd[1509]: time="2025-01-29T16:26:06.177387983Z" level=info msg="StopPodSandbox for \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\"" Jan 29 16:26:06.177755 containerd[1509]: time="2025-01-29T16:26:06.177479058Z" level=info msg="TearDown network for sandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\" successfully" Jan 29 16:26:06.177755 containerd[1509]: time="2025-01-29T16:26:06.177488165Z" level=info msg="StopPodSandbox for \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\" returns successfully" Jan 29 16:26:06.178097 containerd[1509]: time="2025-01-29T16:26:06.178071253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-n86vd,Uid:5f804998-68b0-408c-beb0-2887c4ad4908,Namespace:calico-apiserver,Attempt:2,}" Jan 29 16:26:06.178475 kubelet[2623]: I0129 16:26:06.178435 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b" Jan 29 16:26:06.179620 containerd[1509]: time="2025-01-29T16:26:06.179596189Z" level=info msg="StopPodSandbox for \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\"" Jan 29 16:26:06.180033 containerd[1509]: time="2025-01-29T16:26:06.179879242Z" level=info msg="Ensure that sandbox 73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b in task-service has been cleanup successfully" Jan 29 16:26:06.180109 containerd[1509]: time="2025-01-29T16:26:06.180093814Z" level=info msg="TearDown network for sandbox \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\" successfully" Jan 29 16:26:06.180171 containerd[1509]: time="2025-01-29T16:26:06.180158678Z" level=info msg="StopPodSandbox for \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\" returns successfully" Jan 29 16:26:06.180650 containerd[1509]: time="2025-01-29T16:26:06.180616436Z" level=info msg="StopPodSandbox 
for \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\"" Jan 29 16:26:06.180800 kubelet[2623]: I0129 16:26:06.180775 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089" Jan 29 16:26:06.180978 containerd[1509]: time="2025-01-29T16:26:06.180961378Z" level=info msg="TearDown network for sandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\" successfully" Jan 29 16:26:06.181136 containerd[1509]: time="2025-01-29T16:26:06.181074966Z" level=info msg="StopPodSandbox for \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\" returns successfully" Jan 29 16:26:06.181297 containerd[1509]: time="2025-01-29T16:26:06.181217299Z" level=info msg="StopPodSandbox for \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\"" Jan 29 16:26:06.181516 containerd[1509]: time="2025-01-29T16:26:06.181440447Z" level=info msg="Ensure that sandbox 6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089 in task-service has been cleanup successfully" Jan 29 16:26:06.181803 kubelet[2623]: E0129 16:26:06.181750 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:06.181919 systemd[1]: run-netns-cni\x2d0f3eb94e\x2d05f9\x2dc730\x2da8d7\x2dd3b7ee80544c.mount: Deactivated successfully. 
Jan 29 16:26:06.182065 containerd[1509]: time="2025-01-29T16:26:06.182034707Z" level=info msg="TearDown network for sandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\" successfully" Jan 29 16:26:06.182108 containerd[1509]: time="2025-01-29T16:26:06.182063713Z" level=info msg="StopPodSandbox for \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\" returns successfully" Jan 29 16:26:06.182960 containerd[1509]: time="2025-01-29T16:26:06.182929374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-llv2c,Uid:5b88c73e-075c-4156-a283-4de15bccf36a,Namespace:kube-system,Attempt:2,}" Jan 29 16:26:06.184609 containerd[1509]: time="2025-01-29T16:26:06.184573668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjx6x,Uid:9cc09215-26d9-4b38-816c-abf4c3c659ad,Namespace:calico-system,Attempt:1,}" Jan 29 16:26:06.185586 kubelet[2623]: I0129 16:26:06.185193 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce" Jan 29 16:26:06.185243 systemd[1]: run-netns-cni\x2d2e4763fd\x2df14a\x2dd435\x2ddfcc\x2d580efa9c7ce7.mount: Deactivated successfully. Jan 29 16:26:06.185796 containerd[1509]: time="2025-01-29T16:26:06.185690300Z" level=info msg="StopPodSandbox for \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\"" Jan 29 16:26:06.185332 systemd[1]: run-netns-cni\x2dec5273e1\x2d090f\x2d7b42\x2df839\x2d228d570ef8bd.mount: Deactivated successfully. 
Jan 29 16:26:06.185880 containerd[1509]: time="2025-01-29T16:26:06.185852301Z" level=info msg="Ensure that sandbox fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce in task-service has been cleanup successfully" Jan 29 16:26:06.186386 containerd[1509]: time="2025-01-29T16:26:06.186271916Z" level=info msg="TearDown network for sandbox \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\" successfully" Jan 29 16:26:06.186386 containerd[1509]: time="2025-01-29T16:26:06.186301222Z" level=info msg="StopPodSandbox for \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\" returns successfully" Jan 29 16:26:06.187582 containerd[1509]: time="2025-01-29T16:26:06.187562292Z" level=info msg="StopPodSandbox for \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\"" Jan 29 16:26:06.187653 containerd[1509]: time="2025-01-29T16:26:06.187637986Z" level=info msg="TearDown network for sandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\" successfully" Jan 29 16:26:06.187653 containerd[1509]: time="2025-01-29T16:26:06.187650090Z" level=info msg="StopPodSandbox for \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\" returns successfully" Jan 29 16:26:06.188160 containerd[1509]: time="2025-01-29T16:26:06.188141232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4c55cd5-pmg6b,Uid:db40db35-a526-4e56-80d1-8bc8cd956a1c,Namespace:calico-system,Attempt:2,}" Jan 29 16:26:06.188597 kubelet[2623]: I0129 16:26:06.188573 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09" Jan 29 16:26:06.189141 containerd[1509]: time="2025-01-29T16:26:06.188986103Z" level=info msg="StopPodSandbox for \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\"" Jan 29 16:26:06.189011 systemd[1]: run-netns-cni\x2d7a6e887c\x2d9225\x2d6a3b\x2d8ebe\x2db9679a5f8c64.mount: 
Deactivated successfully. Jan 29 16:26:06.189250 containerd[1509]: time="2025-01-29T16:26:06.189141882Z" level=info msg="Ensure that sandbox bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09 in task-service has been cleanup successfully" Jan 29 16:26:06.189707 containerd[1509]: time="2025-01-29T16:26:06.189520087Z" level=info msg="TearDown network for sandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\" successfully" Jan 29 16:26:06.189707 containerd[1509]: time="2025-01-29T16:26:06.189535427Z" level=info msg="StopPodSandbox for \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\" returns successfully" Jan 29 16:26:06.189978 containerd[1509]: time="2025-01-29T16:26:06.189965632Z" level=info msg="StopPodSandbox for \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\"" Jan 29 16:26:06.190085 containerd[1509]: time="2025-01-29T16:26:06.190034945Z" level=info msg="TearDown network for sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" successfully" Jan 29 16:26:06.190085 containerd[1509]: time="2025-01-29T16:26:06.190047810Z" level=info msg="StopPodSandbox for \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" returns successfully" Jan 29 16:26:06.190362 kubelet[2623]: E0129 16:26:06.190223 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:06.190585 containerd[1509]: time="2025-01-29T16:26:06.190550865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6kc2r,Uid:d39cd512-c288-44a0-b875-c359ef74dd3f,Namespace:kube-system,Attempt:2,}" Jan 29 16:26:06.776752 systemd[1]: run-netns-cni\x2df1937a01\x2d0f18\x2d37af\x2d2145\x2d4993027466fb.mount: Deactivated successfully. 
Jan 29 16:26:06.809027 containerd[1509]: time="2025-01-29T16:26:06.808979053Z" level=error msg="Failed to destroy network for sandbox \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.812557 containerd[1509]: time="2025-01-29T16:26:06.811817328Z" level=error msg="encountered an error cleaning up failed sandbox \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.812557 containerd[1509]: time="2025-01-29T16:26:06.811888004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-n86vd,Uid:5f804998-68b0-408c-beb0-2887c4ad4908,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.813640 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a-shm.mount: Deactivated successfully. 
Jan 29 16:26:06.815339 kubelet[2623]: E0129 16:26:06.813844 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.815339 kubelet[2623]: E0129 16:26:06.813897 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-n86vd" Jan 29 16:26:06.815339 kubelet[2623]: E0129 16:26:06.813919 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-n86vd" Jan 29 16:26:06.815458 kubelet[2623]: E0129 16:26:06.813965 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5947747589-n86vd_calico-apiserver(5f804998-68b0-408c-beb0-2887c4ad4908)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5947747589-n86vd_calico-apiserver(5f804998-68b0-408c-beb0-2887c4ad4908)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5947747589-n86vd" podUID="5f804998-68b0-408c-beb0-2887c4ad4908" Jan 29 16:26:06.820541 containerd[1509]: time="2025-01-29T16:26:06.820345519Z" level=error msg="Failed to destroy network for sandbox \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.821137 containerd[1509]: time="2025-01-29T16:26:06.821109524Z" level=error msg="encountered an error cleaning up failed sandbox \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.821534 containerd[1509]: time="2025-01-29T16:26:06.821509041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-llv2c,Uid:5b88c73e-075c-4156-a283-4de15bccf36a,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.823787 kubelet[2623]: E0129 16:26:06.821857 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.823787 kubelet[2623]: E0129 16:26:06.821921 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-llv2c" Jan 29 16:26:06.823787 kubelet[2623]: E0129 16:26:06.821945 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-llv2c" Jan 29 16:26:06.823933 kubelet[2623]: E0129 16:26:06.821992 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-llv2c_kube-system(5b88c73e-075c-4156-a283-4de15bccf36a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-llv2c_kube-system(5b88c73e-075c-4156-a283-4de15bccf36a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-llv2c" 
podUID="5b88c73e-075c-4156-a283-4de15bccf36a" Jan 29 16:26:06.825604 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69-shm.mount: Deactivated successfully. Jan 29 16:26:06.828101 containerd[1509]: time="2025-01-29T16:26:06.828057804Z" level=error msg="Failed to destroy network for sandbox \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.830126 containerd[1509]: time="2025-01-29T16:26:06.830077228Z" level=error msg="encountered an error cleaning up failed sandbox \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.830213 containerd[1509]: time="2025-01-29T16:26:06.830139298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6kc2r,Uid:d39cd512-c288-44a0-b875-c359ef74dd3f,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.831290 kubelet[2623]: E0129 16:26:06.830372 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.831290 kubelet[2623]: E0129 16:26:06.830450 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6kc2r" Jan 29 16:26:06.831290 kubelet[2623]: E0129 16:26:06.830473 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6kc2r" Jan 29 16:26:06.831495 kubelet[2623]: E0129 16:26:06.830532 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6kc2r_kube-system(d39cd512-c288-44a0-b875-c359ef74dd3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6kc2r_kube-system(d39cd512-c288-44a0-b875-c359ef74dd3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6kc2r" podUID="d39cd512-c288-44a0-b875-c359ef74dd3f" Jan 29 16:26:06.831541 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4-shm.mount: 
Deactivated successfully. Jan 29 16:26:06.834749 containerd[1509]: time="2025-01-29T16:26:06.834708022Z" level=error msg="Failed to destroy network for sandbox \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.835841 containerd[1509]: time="2025-01-29T16:26:06.835764409Z" level=error msg="encountered an error cleaning up failed sandbox \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.837097 containerd[1509]: time="2025-01-29T16:26:06.837035667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4c55cd5-pmg6b,Uid:db40db35-a526-4e56-80d1-8bc8cd956a1c,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.837065 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0-shm.mount: Deactivated successfully. 
Jan 29 16:26:06.837967 kubelet[2623]: E0129 16:26:06.837526 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.837967 kubelet[2623]: E0129 16:26:06.837591 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b" Jan 29 16:26:06.837967 kubelet[2623]: E0129 16:26:06.837618 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b" Jan 29 16:26:06.838195 kubelet[2623]: E0129 16:26:06.837665 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66b4c55cd5-pmg6b_calico-system(db40db35-a526-4e56-80d1-8bc8cd956a1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66b4c55cd5-pmg6b_calico-system(db40db35-a526-4e56-80d1-8bc8cd956a1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b" podUID="db40db35-a526-4e56-80d1-8bc8cd956a1c" Jan 29 16:26:06.845267 containerd[1509]: time="2025-01-29T16:26:06.845152961Z" level=error msg="Failed to destroy network for sandbox \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.846075 containerd[1509]: time="2025-01-29T16:26:06.846033019Z" level=error msg="encountered an error cleaning up failed sandbox \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.846497 containerd[1509]: time="2025-01-29T16:26:06.846473213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-tzh8v,Uid:d3ce4654-53de-4d6a-8744-f657f07eba4f,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.846933 kubelet[2623]: E0129 16:26:06.846898 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.847083 kubelet[2623]: E0129 16:26:06.847067 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" Jan 29 16:26:06.847229 kubelet[2623]: E0129 16:26:06.847166 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" Jan 29 16:26:06.847367 kubelet[2623]: E0129 16:26:06.847214 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5947747589-tzh8v_calico-apiserver(d3ce4654-53de-4d6a-8744-f657f07eba4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5947747589-tzh8v_calico-apiserver(d3ce4654-53de-4d6a-8744-f657f07eba4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" podUID="d3ce4654-53de-4d6a-8744-f657f07eba4f" Jan 29 16:26:06.859289 containerd[1509]: time="2025-01-29T16:26:06.859237540Z" level=error msg="Failed to destroy network for sandbox \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.860012 containerd[1509]: time="2025-01-29T16:26:06.859855967Z" level=error msg="encountered an error cleaning up failed sandbox \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.860012 containerd[1509]: time="2025-01-29T16:26:06.859917876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjx6x,Uid:9cc09215-26d9-4b38-816c-abf4c3c659ad,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.860156 kubelet[2623]: E0129 16:26:06.860110 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:06.860234 kubelet[2623]: E0129 16:26:06.860175 2623 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mjx6x" Jan 29 16:26:06.860234 kubelet[2623]: E0129 16:26:06.860200 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mjx6x" Jan 29 16:26:06.860284 kubelet[2623]: E0129 16:26:06.860246 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mjx6x_calico-system(9cc09215-26d9-4b38-816c-abf4c3c659ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mjx6x_calico-system(9cc09215-26d9-4b38-816c-abf4c3c659ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mjx6x" podUID="9cc09215-26d9-4b38-816c-abf4c3c659ad" Jan 29 16:26:07.193322 kubelet[2623]: I0129 16:26:07.193214 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0" Jan 29 16:26:07.194433 containerd[1509]: 
time="2025-01-29T16:26:07.194116059Z" level=info msg="StopPodSandbox for \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\""
Jan 29 16:26:07.195192 containerd[1509]: time="2025-01-29T16:26:07.195066811Z" level=info msg="Ensure that sandbox 28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0 in task-service has been cleanup successfully"
Jan 29 16:26:07.195351 containerd[1509]: time="2025-01-29T16:26:07.195289919Z" level=info msg="TearDown network for sandbox \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\" successfully"
Jan 29 16:26:07.195351 containerd[1509]: time="2025-01-29T16:26:07.195304316Z" level=info msg="StopPodSandbox for \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\" returns successfully"
Jan 29 16:26:07.195792 containerd[1509]: time="2025-01-29T16:26:07.195752766Z" level=info msg="StopPodSandbox for \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\""
Jan 29 16:26:07.195913 containerd[1509]: time="2025-01-29T16:26:07.195880812Z" level=info msg="TearDown network for sandbox \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\" successfully"
Jan 29 16:26:07.195913 containerd[1509]: time="2025-01-29T16:26:07.195902453Z" level=info msg="StopPodSandbox for \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\" returns successfully"
Jan 29 16:26:07.197367 containerd[1509]: time="2025-01-29T16:26:07.196459220Z" level=info msg="StopPodSandbox for \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\""
Jan 29 16:26:07.197367 containerd[1509]: time="2025-01-29T16:26:07.196540948Z" level=info msg="TearDown network for sandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\" successfully"
Jan 29 16:26:07.197367 containerd[1509]: time="2025-01-29T16:26:07.196551397Z" level=info msg="StopPodSandbox for \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\" returns successfully"
Jan 29 16:26:07.197367 containerd[1509]: time="2025-01-29T16:26:07.196937667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4c55cd5-pmg6b,Uid:db40db35-a526-4e56-80d1-8bc8cd956a1c,Namespace:calico-system,Attempt:3,}"
Jan 29 16:26:07.197808 kubelet[2623]: I0129 16:26:07.196994 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4"
Jan 29 16:26:07.197870 containerd[1509]: time="2025-01-29T16:26:07.197734285Z" level=info msg="StopPodSandbox for \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\""
Jan 29 16:26:07.197954 containerd[1509]: time="2025-01-29T16:26:07.197925141Z" level=info msg="Ensure that sandbox 5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4 in task-service has been cleanup successfully"
Jan 29 16:26:07.198129 containerd[1509]: time="2025-01-29T16:26:07.198104915Z" level=info msg="TearDown network for sandbox \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\" successfully"
Jan 29 16:26:07.198129 containerd[1509]: time="2025-01-29T16:26:07.198122439Z" level=info msg="StopPodSandbox for \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\" returns successfully"
Jan 29 16:26:07.198584 containerd[1509]: time="2025-01-29T16:26:07.198383610Z" level=info msg="StopPodSandbox for \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\""
Jan 29 16:26:07.200236 kubelet[2623]: I0129 16:26:07.200206 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69"
Jan 29 16:26:07.201152 containerd[1509]: time="2025-01-29T16:26:07.200815712Z" level=info msg="StopPodSandbox for \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\""
Jan 29 16:26:07.201152 containerd[1509]: time="2025-01-29T16:26:07.201026646Z" level=info msg="Ensure that sandbox 04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69 in task-service has been cleanup successfully"
Jan 29 16:26:07.201697 containerd[1509]: time="2025-01-29T16:26:07.201679689Z" level=info msg="TearDown network for sandbox \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\" successfully"
Jan 29 16:26:07.201820 containerd[1509]: time="2025-01-29T16:26:07.201794358Z" level=info msg="StopPodSandbox for \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\" returns successfully"
Jan 29 16:26:07.202177 containerd[1509]: time="2025-01-29T16:26:07.202116286Z" level=info msg="StopPodSandbox for \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\""
Jan 29 16:26:07.202308 containerd[1509]: time="2025-01-29T16:26:07.202224553Z" level=info msg="TearDown network for sandbox \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\" successfully"
Jan 29 16:26:07.202308 containerd[1509]: time="2025-01-29T16:26:07.202236296Z" level=info msg="StopPodSandbox for \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\" returns successfully"
Jan 29 16:26:07.202533 containerd[1509]: time="2025-01-29T16:26:07.202473952Z" level=info msg="StopPodSandbox for \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\""
Jan 29 16:26:07.202623 containerd[1509]: time="2025-01-29T16:26:07.202581758Z" level=info msg="TearDown network for sandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\" successfully"
Jan 29 16:26:07.202623 containerd[1509]: time="2025-01-29T16:26:07.202592519Z" level=info msg="StopPodSandbox for \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\" returns successfully"
Jan 29 16:26:07.202872 kubelet[2623]: E0129 16:26:07.202849 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:26:07.202918 containerd[1509]: time="2025-01-29T16:26:07.202858018Z" level=info msg="TearDown network for sandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\" successfully"
Jan 29 16:26:07.202918 containerd[1509]: time="2025-01-29T16:26:07.202872856Z" level=info msg="StopPodSandbox for \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\" returns successfully"
Jan 29 16:26:07.203130 containerd[1509]: time="2025-01-29T16:26:07.203078921Z" level=info msg="StopPodSandbox for \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\""
Jan 29 16:26:07.203199 kubelet[2623]: I0129 16:26:07.203094 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833"
Jan 29 16:26:07.203252 containerd[1509]: time="2025-01-29T16:26:07.203165648Z" level=info msg="TearDown network for sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" successfully"
Jan 29 16:26:07.203252 containerd[1509]: time="2025-01-29T16:26:07.203176408Z" level=info msg="StopPodSandbox for \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" returns successfully"
Jan 29 16:26:07.203474 containerd[1509]: time="2025-01-29T16:26:07.203317278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-llv2c,Uid:5b88c73e-075c-4156-a283-4de15bccf36a,Namespace:kube-system,Attempt:3,}"
Jan 29 16:26:07.203854 kubelet[2623]: E0129 16:26:07.203837 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:26:07.204021 containerd[1509]: time="2025-01-29T16:26:07.203989026Z" level=info msg="StopPodSandbox for \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\""
Jan 29 16:26:07.204287 containerd[1509]: time="2025-01-29T16:26:07.204204278Z" level=info msg="Ensure that sandbox 96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833 in task-service has been cleanup successfully"
Jan 29 16:26:07.204555 containerd[1509]: time="2025-01-29T16:26:07.204499414Z" level=info msg="TearDown network for sandbox \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\" successfully"
Jan 29 16:26:07.204555 containerd[1509]: time="2025-01-29T16:26:07.204521957Z" level=info msg="StopPodSandbox for \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\" returns successfully"
Jan 29 16:26:07.204854 containerd[1509]: time="2025-01-29T16:26:07.204764372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6kc2r,Uid:d39cd512-c288-44a0-b875-c359ef74dd3f,Namespace:kube-system,Attempt:3,}"
Jan 29 16:26:07.206446 containerd[1509]: time="2025-01-29T16:26:07.205292094Z" level=info msg="StopPodSandbox for \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\""
Jan 29 16:26:07.206446 containerd[1509]: time="2025-01-29T16:26:07.205377568Z" level=info msg="TearDown network for sandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\" successfully"
Jan 29 16:26:07.206446 containerd[1509]: time="2025-01-29T16:26:07.205386866Z" level=info msg="StopPodSandbox for \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\" returns successfully"
Jan 29 16:26:07.207187 containerd[1509]: time="2025-01-29T16:26:07.207156848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjx6x,Uid:9cc09215-26d9-4b38-816c-abf4c3c659ad,Namespace:calico-system,Attempt:2,}"
Jan 29 16:26:07.208126 kubelet[2623]: I0129 16:26:07.208104 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509"
Jan 29 16:26:07.208837 containerd[1509]: time="2025-01-29T16:26:07.208558015Z" level=info msg="StopPodSandbox for \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\""
Jan 29 16:26:07.208837 containerd[1509]: time="2025-01-29T16:26:07.208710437Z" level=info msg="Ensure that sandbox d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509 in task-service has been cleanup successfully"
Jan 29 16:26:07.209458 containerd[1509]: time="2025-01-29T16:26:07.209423554Z" level=info msg="TearDown network for sandbox \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\" successfully"
Jan 29 16:26:07.209563 containerd[1509]: time="2025-01-29T16:26:07.209550046Z" level=info msg="StopPodSandbox for \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\" returns successfully"
Jan 29 16:26:07.210113 containerd[1509]: time="2025-01-29T16:26:07.210083159Z" level=info msg="StopPodSandbox for \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\""
Jan 29 16:26:07.210715 containerd[1509]: time="2025-01-29T16:26:07.210526839Z" level=info msg="TearDown network for sandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\" successfully"
Jan 29 16:26:07.210819 containerd[1509]: time="2025-01-29T16:26:07.210802788Z" level=info msg="StopPodSandbox for \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\" returns successfully"
Jan 29 16:26:07.210940 kubelet[2623]: I0129 16:26:07.210924 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a"
Jan 29 16:26:07.211333 containerd[1509]: time="2025-01-29T16:26:07.211293218Z" level=info msg="StopPodSandbox for \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\""
Jan 29 16:26:07.211498 containerd[1509]: time="2025-01-29T16:26:07.211471280Z" level=info msg="Ensure that sandbox 743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a in task-service has been cleanup successfully"
Jan 29 16:26:07.211723 containerd[1509]: time="2025-01-29T16:26:07.211704177Z" level=info msg="StopPodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\""
Jan 29 16:26:07.212371 containerd[1509]: time="2025-01-29T16:26:07.212314887Z" level=info msg="TearDown network for sandbox \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\" successfully"
Jan 29 16:26:07.212371 containerd[1509]: time="2025-01-29T16:26:07.212368009Z" level=info msg="StopPodSandbox for \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\" returns successfully"
Jan 29 16:26:07.212501 containerd[1509]: time="2025-01-29T16:26:07.212483049Z" level=info msg="TearDown network for sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" successfully"
Jan 29 16:26:07.212678 containerd[1509]: time="2025-01-29T16:26:07.212620824Z" level=info msg="StopPodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" returns successfully"
Jan 29 16:26:07.213386 containerd[1509]: time="2025-01-29T16:26:07.213252795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-tzh8v,Uid:d3ce4654-53de-4d6a-8744-f657f07eba4f,Namespace:calico-apiserver,Attempt:3,}"
Jan 29 16:26:07.213386 containerd[1509]: time="2025-01-29T16:26:07.213294245Z" level=info msg="StopPodSandbox for \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\""
Jan 29 16:26:07.213478 containerd[1509]: time="2025-01-29T16:26:07.213441517Z" level=info msg="TearDown network for sandbox \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\" successfully"
Jan 29 16:26:07.213478 containerd[1509]: time="2025-01-29T16:26:07.213454151Z" level=info msg="StopPodSandbox for \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\" returns successfully"
Jan 29 16:26:07.214517 containerd[1509]: time="2025-01-29T16:26:07.213969850Z" level=info msg="StopPodSandbox for \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\""
Jan 29 16:26:07.214798 containerd[1509]: time="2025-01-29T16:26:07.214778029Z" level=info msg="TearDown network for sandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\" successfully"
Jan 29 16:26:07.214905 containerd[1509]: time="2025-01-29T16:26:07.214846801Z" level=info msg="StopPodSandbox for \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\" returns successfully"
Jan 29 16:26:07.215281 containerd[1509]: time="2025-01-29T16:26:07.215256967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-n86vd,Uid:5f804998-68b0-408c-beb0-2887c4ad4908,Namespace:calico-apiserver,Attempt:3,}"
Jan 29 16:26:07.363518 containerd[1509]: time="2025-01-29T16:26:07.363428202Z" level=error msg="Failed to destroy network for sandbox \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.363856 containerd[1509]: time="2025-01-29T16:26:07.363819613Z" level=error msg="encountered an error cleaning up failed sandbox \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.363933 containerd[1509]: time="2025-01-29T16:26:07.363881811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6kc2r,Uid:d39cd512-c288-44a0-b875-c359ef74dd3f,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.364145 kubelet[2623]: E0129 16:26:07.364105 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.364349 kubelet[2623]: E0129 16:26:07.364321 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6kc2r"
Jan 29 16:26:07.364349 kubelet[2623]: E0129 16:26:07.364348 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6kc2r"
Jan 29 16:26:07.364486 kubelet[2623]: E0129 16:26:07.364418 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6kc2r_kube-system(d39cd512-c288-44a0-b875-c359ef74dd3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6kc2r_kube-system(d39cd512-c288-44a0-b875-c359ef74dd3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6kc2r" podUID="d39cd512-c288-44a0-b875-c359ef74dd3f"
Jan 29 16:26:07.371243 containerd[1509]: time="2025-01-29T16:26:07.371072867Z" level=error msg="Failed to destroy network for sandbox \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.372706 containerd[1509]: time="2025-01-29T16:26:07.372660421Z" level=error msg="encountered an error cleaning up failed sandbox \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.372774 containerd[1509]: time="2025-01-29T16:26:07.372747057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-llv2c,Uid:5b88c73e-075c-4156-a283-4de15bccf36a,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.373032 kubelet[2623]: E0129 16:26:07.372953 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.373032 kubelet[2623]: E0129 16:26:07.373012 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-llv2c"
Jan 29 16:26:07.373102 kubelet[2623]: E0129 16:26:07.373033 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-llv2c"
Jan 29 16:26:07.373102 kubelet[2623]: E0129 16:26:07.373082 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-llv2c_kube-system(5b88c73e-075c-4156-a283-4de15bccf36a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-llv2c_kube-system(5b88c73e-075c-4156-a283-4de15bccf36a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-llv2c" podUID="5b88c73e-075c-4156-a283-4de15bccf36a"
Jan 29 16:26:07.388019 containerd[1509]: time="2025-01-29T16:26:07.387263195Z" level=error msg="Failed to destroy network for sandbox \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.388294 containerd[1509]: time="2025-01-29T16:26:07.388262010Z" level=error msg="encountered an error cleaning up failed sandbox \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.388369 containerd[1509]: time="2025-01-29T16:26:07.388334539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4c55cd5-pmg6b,Uid:db40db35-a526-4e56-80d1-8bc8cd956a1c,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.388931 kubelet[2623]: E0129 16:26:07.388896 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.388996 kubelet[2623]: E0129 16:26:07.388947 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b"
Jan 29 16:26:07.388996 kubelet[2623]: E0129 16:26:07.388966 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b"
Jan 29 16:26:07.389048 kubelet[2623]: E0129 16:26:07.389001 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66b4c55cd5-pmg6b_calico-system(db40db35-a526-4e56-80d1-8bc8cd956a1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66b4c55cd5-pmg6b_calico-system(db40db35-a526-4e56-80d1-8bc8cd956a1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b" podUID="db40db35-a526-4e56-80d1-8bc8cd956a1c"
Jan 29 16:26:07.394030 containerd[1509]: time="2025-01-29T16:26:07.393994290Z" level=error msg="Failed to destroy network for sandbox \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.394534 containerd[1509]: time="2025-01-29T16:26:07.394511621Z" level=error msg="encountered an error cleaning up failed sandbox \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.394698 containerd[1509]: time="2025-01-29T16:26:07.394679863Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-n86vd,Uid:5f804998-68b0-408c-beb0-2887c4ad4908,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.394987 kubelet[2623]: E0129 16:26:07.394955 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.395107 kubelet[2623]: E0129 16:26:07.395087 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-n86vd"
Jan 29 16:26:07.395182 kubelet[2623]: E0129 16:26:07.395152 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-n86vd"
Jan 29 16:26:07.395293 kubelet[2623]: E0129 16:26:07.395254 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5947747589-n86vd_calico-apiserver(5f804998-68b0-408c-beb0-2887c4ad4908)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5947747589-n86vd_calico-apiserver(5f804998-68b0-408c-beb0-2887c4ad4908)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5947747589-n86vd" podUID="5f804998-68b0-408c-beb0-2887c4ad4908"
Jan 29 16:26:07.396905 containerd[1509]: time="2025-01-29T16:26:07.396803344Z" level=error msg="Failed to destroy network for sandbox \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.397390 containerd[1509]: time="2025-01-29T16:26:07.397324163Z" level=error msg="encountered an error cleaning up failed sandbox \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.397614 containerd[1509]: time="2025-01-29T16:26:07.397586005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-tzh8v,Uid:d3ce4654-53de-4d6a-8744-f657f07eba4f,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.397845 kubelet[2623]: E0129 16:26:07.397807 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.397914 kubelet[2623]: E0129 16:26:07.397854 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v"
Jan 29 16:26:07.397914 kubelet[2623]: E0129 16:26:07.397870 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v"
Jan 29 16:26:07.397914 kubelet[2623]: E0129 16:26:07.397898 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5947747589-tzh8v_calico-apiserver(d3ce4654-53de-4d6a-8744-f657f07eba4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5947747589-tzh8v_calico-apiserver(d3ce4654-53de-4d6a-8744-f657f07eba4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" podUID="d3ce4654-53de-4d6a-8744-f657f07eba4f"
Jan 29 16:26:07.417752 containerd[1509]: time="2025-01-29T16:26:07.417707700Z" level=error msg="Failed to destroy network for sandbox \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.418092 containerd[1509]: time="2025-01-29T16:26:07.418063502Z" level=error msg="encountered an error cleaning up failed sandbox \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.418129 containerd[1509]: time="2025-01-29T16:26:07.418113007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjx6x,Uid:9cc09215-26d9-4b38-816c-abf4c3c659ad,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.418321 kubelet[2623]: E0129 16:26:07.418274 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:26:07.418321 kubelet[2623]: E0129 16:26:07.418312 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mjx6x"
Jan 29 16:26:07.418321 kubelet[2623]: E0129 16:26:07.418327 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mjx6x"
Jan 29 16:26:07.418622 kubelet[2623]: E0129 16:26:07.418356 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mjx6x_calico-system(9cc09215-26d9-4b38-816c-abf4c3c659ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mjx6x_calico-system(9cc09215-26d9-4b38-816c-abf4c3c659ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mjx6x" podUID="9cc09215-26d9-4b38-816c-abf4c3c659ad"
Jan 29 16:26:07.768027 systemd[1]: run-netns-cni\x2d9b982217\x2dafe2\x2dd110\x2da20d\x2d60de52e94984.mount: Deactivated successfully.
Jan 29 16:26:07.768437 systemd[1]: run-netns-cni\x2d4b8fee4f\x2d6f6c\x2d57fa\x2db404\x2d9eb6bb974eda.mount: Deactivated successfully.
Jan 29 16:26:07.768514 systemd[1]: run-netns-cni\x2d17c3712a\x2dc00d\x2d3762\x2dd9d0\x2dc4cf412fa205.mount: Deactivated successfully.
Jan 29 16:26:07.768588 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509-shm.mount: Deactivated successfully.
Jan 29 16:26:07.768668 systemd[1]: run-netns-cni\x2dce484ba5\x2dc07a\x2d664d\x2dbb66\x2d58bf9fbbfd54.mount: Deactivated successfully.
Jan 29 16:26:07.768741 systemd[1]: run-netns-cni\x2d947ee802\x2dbc31\x2dc737\x2d8b95\x2d790c52c071b6.mount: Deactivated successfully.
Jan 29 16:26:07.768816 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833-shm.mount: Deactivated successfully.
Jan 29 16:26:07.768894 systemd[1]: run-netns-cni\x2dd3583052\x2d4b3d\x2d7dd2\x2d6ddc\x2d61b6fdcc5470.mount: Deactivated successfully.
Jan 29 16:26:08.214664 kubelet[2623]: I0129 16:26:08.214616 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2"
Jan 29 16:26:08.215386 containerd[1509]: time="2025-01-29T16:26:08.215333592Z" level=info msg="StopPodSandbox for \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\""
Jan 29 16:26:08.215701 containerd[1509]: time="2025-01-29T16:26:08.215595644Z" level=info msg="Ensure that sandbox ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2 in task-service has been cleanup successfully"
Jan 29 16:26:08.215833 containerd[1509]: time="2025-01-29T16:26:08.215814484Z" level=info msg="TearDown network for sandbox \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\" successfully"
Jan 29 16:26:08.215833 containerd[1509]: time="2025-01-29T16:26:08.215830213Z" level=info msg="StopPodSandbox for \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\" returns successfully"
Jan 29 16:26:08.218290 containerd[1509]: time="2025-01-29T16:26:08.217333333Z" level=info msg="StopPodSandbox for \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\""
Jan 29 16:26:08.218290 containerd[1509]: time="2025-01-29T16:26:08.217441451Z" level=info msg="TearDown network for sandbox \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\" successfully"
Jan 29 16:26:08.218290 containerd[1509]: time="2025-01-29T16:26:08.217451951Z" level=info msg="StopPodSandbox for \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\" returns successfully"
Jan 29 16:26:08.218065 systemd[1]: run-netns-cni\x2d0822fe23\x2d4d9c\x2d7612\x2d4d8a\x2df7a4f6507191.mount: Deactivated successfully.
Jan 29 16:26:08.219222 containerd[1509]: time="2025-01-29T16:26:08.218686756Z" level=info msg="StopPodSandbox for \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\""
Jan 29 16:26:08.219222 containerd[1509]: time="2025-01-29T16:26:08.218787168Z" level=info msg="TearDown network for sandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\" successfully"
Jan 29 16:26:08.219222 containerd[1509]: time="2025-01-29T16:26:08.218796707Z" level=info msg="StopPodSandbox for \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\" returns successfully"
Jan 29 16:26:08.220153 containerd[1509]: time="2025-01-29T16:26:08.220124120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjx6x,Uid:9cc09215-26d9-4b38-816c-abf4c3c659ad,Namespace:calico-system,Attempt:3,}"
Jan 29 16:26:08.221210 kubelet[2623]: I0129 16:26:08.221184 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063"
Jan 29 16:26:08.222164 containerd[1509]: time="2025-01-29T16:26:08.221865977Z" level=info msg="StopPodSandbox for \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\""
Jan 29 16:26:08.222164 containerd[1509]: time="2025-01-29T16:26:08.222032486Z" level=info msg="Ensure that sandbox 690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063 in task-service has been cleanup successfully"
Jan 29 16:26:08.222580 containerd[1509]: time="2025-01-29T16:26:08.222561149Z" level=info msg="TearDown network for sandbox \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\" successfully"
Jan 29 16:26:08.222913 containerd[1509]: time="2025-01-29T16:26:08.222896141Z" level=info msg="StopPodSandbox for \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\" returns successfully"
Jan 29 16:26:08.223326 kubelet[2623]: I0129 16:26:08.223087 2623 pod_container_deletor.go:80] "Container not found in pod's
containers" containerID="eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1" Jan 29 16:26:08.223422 containerd[1509]: time="2025-01-29T16:26:08.223104560Z" level=info msg="StopPodSandbox for \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\"" Jan 29 16:26:08.223422 containerd[1509]: time="2025-01-29T16:26:08.223190295Z" level=info msg="TearDown network for sandbox \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\" successfully" Jan 29 16:26:08.223422 containerd[1509]: time="2025-01-29T16:26:08.223199692Z" level=info msg="StopPodSandbox for \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\" returns successfully" Jan 29 16:26:08.223806 containerd[1509]: time="2025-01-29T16:26:08.223621089Z" level=info msg="StopPodSandbox for \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\"" Jan 29 16:26:08.223806 containerd[1509]: time="2025-01-29T16:26:08.223727393Z" level=info msg="TearDown network for sandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\" successfully" Jan 29 16:26:08.223806 containerd[1509]: time="2025-01-29T16:26:08.223736541Z" level=info msg="StopPodSandbox for \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\" returns successfully" Jan 29 16:26:08.223917 containerd[1509]: time="2025-01-29T16:26:08.223789132Z" level=info msg="StopPodSandbox for \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\"" Jan 29 16:26:08.224317 containerd[1509]: time="2025-01-29T16:26:08.224085620Z" level=info msg="Ensure that sandbox eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1 in task-service has been cleanup successfully" Jan 29 16:26:08.224608 systemd[1]: run-netns-cni\x2dd808c592\x2d586b\x2d262a\x2deef4\x2dcff798bf04d8.mount: Deactivated successfully. 
Jan 29 16:26:08.224752 containerd[1509]: time="2025-01-29T16:26:08.224611827Z" level=info msg="StopPodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\"" Jan 29 16:26:08.224752 containerd[1509]: time="2025-01-29T16:26:08.224615885Z" level=info msg="TearDown network for sandbox \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\" successfully" Jan 29 16:26:08.224752 containerd[1509]: time="2025-01-29T16:26:08.224650782Z" level=info msg="StopPodSandbox for \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\" returns successfully" Jan 29 16:26:08.224977 containerd[1509]: time="2025-01-29T16:26:08.224956017Z" level=info msg="StopPodSandbox for \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\"" Jan 29 16:26:08.225055 containerd[1509]: time="2025-01-29T16:26:08.225038505Z" level=info msg="TearDown network for sandbox \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\" successfully" Jan 29 16:26:08.225093 containerd[1509]: time="2025-01-29T16:26:08.225053143Z" level=info msg="StopPodSandbox for \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\" returns successfully" Jan 29 16:26:08.225893 containerd[1509]: time="2025-01-29T16:26:08.225857104Z" level=info msg="StopPodSandbox for \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\"" Jan 29 16:26:08.227540 containerd[1509]: time="2025-01-29T16:26:08.225937928Z" level=info msg="TearDown network for sandbox \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\" successfully" Jan 29 16:26:08.227540 containerd[1509]: time="2025-01-29T16:26:08.225952907Z" level=info msg="StopPodSandbox for \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\" returns successfully" Jan 29 16:26:08.227540 containerd[1509]: time="2025-01-29T16:26:08.226266888Z" level=info msg="StopPodSandbox for \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\"" Jan 29 16:26:08.227540 
containerd[1509]: time="2025-01-29T16:26:08.226338325Z" level=info msg="TearDown network for sandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\" successfully" Jan 29 16:26:08.227540 containerd[1509]: time="2025-01-29T16:26:08.226347954Z" level=info msg="StopPodSandbox for \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\" returns successfully" Jan 29 16:26:08.227540 containerd[1509]: time="2025-01-29T16:26:08.226431694Z" level=info msg="TearDown network for sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" successfully" Jan 29 16:26:08.227540 containerd[1509]: time="2025-01-29T16:26:08.226446813Z" level=info msg="StopPodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" returns successfully" Jan 29 16:26:08.227540 containerd[1509]: time="2025-01-29T16:26:08.227355123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-n86vd,Uid:5f804998-68b0-408c-beb0-2887c4ad4908,Namespace:calico-apiserver,Attempt:4,}" Jan 29 16:26:08.227540 containerd[1509]: time="2025-01-29T16:26:08.227527643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-tzh8v,Uid:d3ce4654-53de-4d6a-8744-f657f07eba4f,Namespace:calico-apiserver,Attempt:4,}" Jan 29 16:26:08.227787 kubelet[2623]: I0129 16:26:08.227766 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc" Jan 29 16:26:08.228132 systemd[1]: run-netns-cni\x2d2f758719\x2d7871\x2d053b\x2da2ee\x2d26c44aa74a92.mount: Deactivated successfully. 
Jan 29 16:26:08.228420 containerd[1509]: time="2025-01-29T16:26:08.228341673Z" level=info msg="StopPodSandbox for \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\"" Jan 29 16:26:08.229103 containerd[1509]: time="2025-01-29T16:26:08.229072834Z" level=info msg="Ensure that sandbox c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc in task-service has been cleanup successfully" Jan 29 16:26:08.229284 containerd[1509]: time="2025-01-29T16:26:08.229248651Z" level=info msg="TearDown network for sandbox \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\" successfully" Jan 29 16:26:08.229284 containerd[1509]: time="2025-01-29T16:26:08.229266195Z" level=info msg="StopPodSandbox for \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\" returns successfully" Jan 29 16:26:08.229891 containerd[1509]: time="2025-01-29T16:26:08.229835425Z" level=info msg="StopPodSandbox for \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\"" Jan 29 16:26:08.229939 containerd[1509]: time="2025-01-29T16:26:08.229924415Z" level=info msg="TearDown network for sandbox \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\" successfully" Jan 29 16:26:08.229939 containerd[1509]: time="2025-01-29T16:26:08.229934845Z" level=info msg="StopPodSandbox for \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\" returns successfully" Jan 29 16:26:08.230474 containerd[1509]: time="2025-01-29T16:26:08.230385769Z" level=info msg="StopPodSandbox for \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\"" Jan 29 16:26:08.230878 containerd[1509]: time="2025-01-29T16:26:08.230850029Z" level=info msg="TearDown network for sandbox \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\" successfully" Jan 29 16:26:08.230878 containerd[1509]: time="2025-01-29T16:26:08.230869847Z" level=info msg="StopPodSandbox for \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\" 
returns successfully" Jan 29 16:26:08.231201 systemd[1]: run-netns-cni\x2dee1b0287\x2d1f36\x2dcd0a\x2dbc0c\x2df99c73aedea2.mount: Deactivated successfully. Jan 29 16:26:08.232090 containerd[1509]: time="2025-01-29T16:26:08.231629753Z" level=info msg="StopPodSandbox for \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\"" Jan 29 16:26:08.232183 containerd[1509]: time="2025-01-29T16:26:08.232163595Z" level=info msg="TearDown network for sandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\" successfully" Jan 29 16:26:08.232183 containerd[1509]: time="2025-01-29T16:26:08.232180257Z" level=info msg="StopPodSandbox for \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\" returns successfully" Jan 29 16:26:08.232549 kubelet[2623]: I0129 16:26:08.232532 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57" Jan 29 16:26:08.232914 containerd[1509]: time="2025-01-29T16:26:08.232889346Z" level=info msg="StopPodSandbox for \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\"" Jan 29 16:26:08.233049 containerd[1509]: time="2025-01-29T16:26:08.233024084Z" level=info msg="Ensure that sandbox 619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57 in task-service has been cleanup successfully" Jan 29 16:26:08.233187 containerd[1509]: time="2025-01-29T16:26:08.233163852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4c55cd5-pmg6b,Uid:db40db35-a526-4e56-80d1-8bc8cd956a1c,Namespace:calico-system,Attempt:4,}" Jan 29 16:26:08.233291 containerd[1509]: time="2025-01-29T16:26:08.233271377Z" level=info msg="TearDown network for sandbox \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\" successfully" Jan 29 16:26:08.233291 containerd[1509]: time="2025-01-29T16:26:08.233288991Z" level=info msg="StopPodSandbox for 
\"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\" returns successfully" Jan 29 16:26:08.233751 containerd[1509]: time="2025-01-29T16:26:08.233608393Z" level=info msg="StopPodSandbox for \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\"" Jan 29 16:26:08.233751 containerd[1509]: time="2025-01-29T16:26:08.233690080Z" level=info msg="TearDown network for sandbox \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\" successfully" Jan 29 16:26:08.233751 containerd[1509]: time="2025-01-29T16:26:08.233699678Z" level=info msg="StopPodSandbox for \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\" returns successfully" Jan 29 16:26:08.233970 containerd[1509]: time="2025-01-29T16:26:08.233941441Z" level=info msg="StopPodSandbox for \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\"" Jan 29 16:26:08.234143 containerd[1509]: time="2025-01-29T16:26:08.234121737Z" level=info msg="TearDown network for sandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\" successfully" Jan 29 16:26:08.234143 containerd[1509]: time="2025-01-29T16:26:08.234139070Z" level=info msg="StopPodSandbox for \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\" returns successfully" Jan 29 16:26:08.234509 containerd[1509]: time="2025-01-29T16:26:08.234481636Z" level=info msg="StopPodSandbox for \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\"" Jan 29 16:26:08.234745 containerd[1509]: time="2025-01-29T16:26:08.234730713Z" level=info msg="TearDown network for sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" successfully" Jan 29 16:26:08.234819 containerd[1509]: time="2025-01-29T16:26:08.234806859Z" level=info msg="StopPodSandbox for \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" returns successfully" Jan 29 16:26:08.235140 kubelet[2623]: E0129 16:26:08.235109 2623 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:08.235745 containerd[1509]: time="2025-01-29T16:26:08.235625697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6kc2r,Uid:d39cd512-c288-44a0-b875-c359ef74dd3f,Namespace:kube-system,Attempt:4,}" Jan 29 16:26:08.235788 kubelet[2623]: I0129 16:26:08.235733 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991" Jan 29 16:26:08.236124 containerd[1509]: time="2025-01-29T16:26:08.236095358Z" level=info msg="StopPodSandbox for \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\"" Jan 29 16:26:08.236300 containerd[1509]: time="2025-01-29T16:26:08.236280001Z" level=info msg="Ensure that sandbox 119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991 in task-service has been cleanup successfully" Jan 29 16:26:08.236510 containerd[1509]: time="2025-01-29T16:26:08.236476458Z" level=info msg="TearDown network for sandbox \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\" successfully" Jan 29 16:26:08.236510 containerd[1509]: time="2025-01-29T16:26:08.236494803Z" level=info msg="StopPodSandbox for \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\" returns successfully" Jan 29 16:26:08.236788 containerd[1509]: time="2025-01-29T16:26:08.236764640Z" level=info msg="StopPodSandbox for \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\"" Jan 29 16:26:08.237127 containerd[1509]: time="2025-01-29T16:26:08.236864512Z" level=info msg="TearDown network for sandbox \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\" successfully" Jan 29 16:26:08.237127 containerd[1509]: time="2025-01-29T16:26:08.236876454Z" level=info msg="StopPodSandbox for \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\" returns 
successfully" Jan 29 16:26:08.237187 containerd[1509]: time="2025-01-29T16:26:08.237144708Z" level=info msg="StopPodSandbox for \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\"" Jan 29 16:26:08.237274 containerd[1509]: time="2025-01-29T16:26:08.237239460Z" level=info msg="TearDown network for sandbox \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\" successfully" Jan 29 16:26:08.237274 containerd[1509]: time="2025-01-29T16:26:08.237251552Z" level=info msg="StopPodSandbox for \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\" returns successfully" Jan 29 16:26:08.237631 containerd[1509]: time="2025-01-29T16:26:08.237611192Z" level=info msg="StopPodSandbox for \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\"" Jan 29 16:26:08.237891 containerd[1509]: time="2025-01-29T16:26:08.237869075Z" level=info msg="TearDown network for sandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\" successfully" Jan 29 16:26:08.237891 containerd[1509]: time="2025-01-29T16:26:08.237886038Z" level=info msg="StopPodSandbox for \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\" returns successfully" Jan 29 16:26:08.238068 kubelet[2623]: E0129 16:26:08.238049 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:08.238416 containerd[1509]: time="2025-01-29T16:26:08.238360427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-llv2c,Uid:5b88c73e-075c-4156-a283-4de15bccf36a,Namespace:kube-system,Attempt:4,}" Jan 29 16:26:08.766134 systemd[1]: run-netns-cni\x2d1696facd\x2da66c\x2d7008\x2d0724\x2d53135e91dd1f.mount: Deactivated successfully. Jan 29 16:26:08.766255 systemd[1]: run-netns-cni\x2d08f2c1d6\x2d9383\x2dd01a\x2da467\x2d412237bd6735.mount: Deactivated successfully. 
Jan 29 16:26:08.870804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1945045449.mount: Deactivated successfully. Jan 29 16:26:10.829906 systemd[1]: Started sshd@9-10.0.0.142:22-10.0.0.1:39798.service - OpenSSH per-connection server daemon (10.0.0.1:39798). Jan 29 16:26:10.873144 containerd[1509]: time="2025-01-29T16:26:10.873061514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:10.904094 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 39798 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:10.905553 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:10.909241 containerd[1509]: time="2025-01-29T16:26:10.909154487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 16:26:10.916672 systemd-logind[1494]: New session 10 of user core. Jan 29 16:26:10.923749 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 29 16:26:10.952445 containerd[1509]: time="2025-01-29T16:26:10.951275075Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:10.965853 containerd[1509]: time="2025-01-29T16:26:10.965503851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:10.967198 containerd[1509]: time="2025-01-29T16:26:10.966852132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.802813586s" Jan 29 16:26:10.967198 containerd[1509]: time="2025-01-29T16:26:10.966917416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 16:26:10.984523 containerd[1509]: time="2025-01-29T16:26:10.984449845Z" level=info msg="CreateContainer within sandbox \"2d33faef49792b97eb88da08fe174a20ea3085ac438b330cf96a074dcfdfd579\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 16:26:11.034285 containerd[1509]: time="2025-01-29T16:26:11.032956148Z" level=error msg="Failed to destroy network for sandbox \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.034285 containerd[1509]: time="2025-01-29T16:26:11.033493405Z" level=error msg="encountered an error cleaning 
up failed sandbox \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.034285 containerd[1509]: time="2025-01-29T16:26:11.033565624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-n86vd,Uid:5f804998-68b0-408c-beb0-2887c4ad4908,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.034604 kubelet[2623]: E0129 16:26:11.033974 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.034604 kubelet[2623]: E0129 16:26:11.034073 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-n86vd" Jan 29 16:26:11.034604 kubelet[2623]: E0129 16:26:11.034120 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-n86vd" Jan 29 16:26:11.035883 kubelet[2623]: E0129 16:26:11.035281 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5947747589-n86vd_calico-apiserver(5f804998-68b0-408c-beb0-2887c4ad4908)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5947747589-n86vd_calico-apiserver(5f804998-68b0-408c-beb0-2887c4ad4908)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5947747589-n86vd" podUID="5f804998-68b0-408c-beb0-2887c4ad4908" Jan 29 16:26:11.040430 containerd[1509]: time="2025-01-29T16:26:11.037680744Z" level=info msg="CreateContainer within sandbox \"2d33faef49792b97eb88da08fe174a20ea3085ac438b330cf96a074dcfdfd579\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0343f05559f5819e46f364d6e04facc38ba1a657f7693d235433d5ff3cce8171\"" Jan 29 16:26:11.041978 containerd[1509]: time="2025-01-29T16:26:11.041935310Z" level=info msg="StartContainer for \"0343f05559f5819e46f364d6e04facc38ba1a657f7693d235433d5ff3cce8171\"" Jan 29 16:26:11.060689 containerd[1509]: time="2025-01-29T16:26:11.060627128Z" level=error msg="Failed to destroy network for sandbox \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.064697 containerd[1509]: time="2025-01-29T16:26:11.064482110Z" level=error msg="encountered an error cleaning up failed sandbox \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.064697 containerd[1509]: time="2025-01-29T16:26:11.064570769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-tzh8v,Uid:d3ce4654-53de-4d6a-8744-f657f07eba4f,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.064912 kubelet[2623]: E0129 16:26:11.064848 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.064984 kubelet[2623]: E0129 16:26:11.064920 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" Jan 29 16:26:11.064984 kubelet[2623]: E0129 16:26:11.064944 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" Jan 29 16:26:11.065056 kubelet[2623]: E0129 16:26:11.064992 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5947747589-tzh8v_calico-apiserver(d3ce4654-53de-4d6a-8744-f657f07eba4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5947747589-tzh8v_calico-apiserver(d3ce4654-53de-4d6a-8744-f657f07eba4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" podUID="d3ce4654-53de-4d6a-8744-f657f07eba4f" Jan 29 16:26:11.072985 containerd[1509]: time="2025-01-29T16:26:11.072800358Z" level=error msg="Failed to destroy network for sandbox \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.073608 containerd[1509]: time="2025-01-29T16:26:11.073581181Z" level=error msg="encountered an error cleaning up failed sandbox 
\"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.073759 containerd[1509]: time="2025-01-29T16:26:11.073734976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjx6x,Uid:9cc09215-26d9-4b38-816c-abf4c3c659ad,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.074359 kubelet[2623]: E0129 16:26:11.074125 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.074359 kubelet[2623]: E0129 16:26:11.074215 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mjx6x" Jan 29 16:26:11.074359 kubelet[2623]: E0129 16:26:11.074238 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mjx6x" Jan 29 16:26:11.075675 kubelet[2623]: E0129 16:26:11.074726 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mjx6x_calico-system(9cc09215-26d9-4b38-816c-abf4c3c659ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mjx6x_calico-system(9cc09215-26d9-4b38-816c-abf4c3c659ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mjx6x" podUID="9cc09215-26d9-4b38-816c-abf4c3c659ad" Jan 29 16:26:11.102005 containerd[1509]: time="2025-01-29T16:26:11.101002653Z" level=error msg="Failed to destroy network for sandbox \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.105781 containerd[1509]: time="2025-01-29T16:26:11.105569978Z" level=error msg="encountered an error cleaning up failed sandbox \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.105781 containerd[1509]: time="2025-01-29T16:26:11.105663016Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6kc2r,Uid:d39cd512-c288-44a0-b875-c359ef74dd3f,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.106475 kubelet[2623]: E0129 16:26:11.106124 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.106475 kubelet[2623]: E0129 16:26:11.106199 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6kc2r" Jan 29 16:26:11.106475 kubelet[2623]: E0129 16:26:11.106227 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6kc2r" Jan 29 16:26:11.106624 kubelet[2623]: E0129 16:26:11.106282 2623 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6kc2r_kube-system(d39cd512-c288-44a0-b875-c359ef74dd3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6kc2r_kube-system(d39cd512-c288-44a0-b875-c359ef74dd3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6kc2r" podUID="d39cd512-c288-44a0-b875-c359ef74dd3f" Jan 29 16:26:11.120943 containerd[1509]: time="2025-01-29T16:26:11.120732977Z" level=error msg="Failed to destroy network for sandbox \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.121709 containerd[1509]: time="2025-01-29T16:26:11.121577241Z" level=error msg="encountered an error cleaning up failed sandbox \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.121709 containerd[1509]: time="2025-01-29T16:26:11.121654849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4c55cd5-pmg6b,Uid:db40db35-a526-4e56-80d1-8bc8cd956a1c,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.122452 kubelet[2623]: E0129 16:26:11.122116 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.122452 kubelet[2623]: E0129 16:26:11.122189 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b" Jan 29 16:26:11.122452 kubelet[2623]: E0129 16:26:11.122216 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b" Jan 29 16:26:11.122594 kubelet[2623]: E0129 16:26:11.122270 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66b4c55cd5-pmg6b_calico-system(db40db35-a526-4e56-80d1-8bc8cd956a1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66b4c55cd5-pmg6b_calico-system(db40db35-a526-4e56-80d1-8bc8cd956a1c)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b" podUID="db40db35-a526-4e56-80d1-8bc8cd956a1c" Jan 29 16:26:11.140790 containerd[1509]: time="2025-01-29T16:26:11.140686827Z" level=error msg="Failed to destroy network for sandbox \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.141247 containerd[1509]: time="2025-01-29T16:26:11.141210439Z" level=error msg="encountered an error cleaning up failed sandbox \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.141333 containerd[1509]: time="2025-01-29T16:26:11.141286294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-llv2c,Uid:5b88c73e-075c-4156-a283-4de15bccf36a,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.142268 kubelet[2623]: E0129 16:26:11.142010 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.142268 kubelet[2623]: E0129 16:26:11.142099 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-llv2c" Jan 29 16:26:11.142268 kubelet[2623]: E0129 16:26:11.142127 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-llv2c" Jan 29 16:26:11.142543 kubelet[2623]: E0129 16:26:11.142189 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-llv2c_kube-system(5b88c73e-075c-4156-a283-4de15bccf36a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-llv2c_kube-system(5b88c73e-075c-4156-a283-4de15bccf36a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-llv2c" 
podUID="5b88c73e-075c-4156-a283-4de15bccf36a" Jan 29 16:26:11.169746 systemd[1]: Started cri-containerd-0343f05559f5819e46f364d6e04facc38ba1a657f7693d235433d5ff3cce8171.scope - libcontainer container 0343f05559f5819e46f364d6e04facc38ba1a657f7693d235433d5ff3cce8171. Jan 29 16:26:11.191080 sshd[4302]: Connection closed by 10.0.0.1 port 39798 Jan 29 16:26:11.191801 sshd-session[4273]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:11.198863 systemd[1]: sshd@9-10.0.0.142:22-10.0.0.1:39798.service: Deactivated successfully. Jan 29 16:26:11.201513 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 16:26:11.204198 systemd-logind[1494]: Session 10 logged out. Waiting for processes to exit. Jan 29 16:26:11.206658 systemd-logind[1494]: Removed session 10. Jan 29 16:26:11.231883 containerd[1509]: time="2025-01-29T16:26:11.231801088Z" level=info msg="StartContainer for \"0343f05559f5819e46f364d6e04facc38ba1a657f7693d235433d5ff3cce8171\" returns successfully" Jan 29 16:26:11.248021 kubelet[2623]: E0129 16:26:11.246642 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:11.249379 kubelet[2623]: I0129 16:26:11.249344 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723" Jan 29 16:26:11.250660 containerd[1509]: time="2025-01-29T16:26:11.250218630Z" level=info msg="StopPodSandbox for \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\"" Jan 29 16:26:11.250660 containerd[1509]: time="2025-01-29T16:26:11.250571365Z" level=info msg="Ensure that sandbox 6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723 in task-service has been cleanup successfully" Jan 29 16:26:11.251075 containerd[1509]: time="2025-01-29T16:26:11.251019642Z" level=info msg="TearDown network for sandbox 
\"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\" successfully" Jan 29 16:26:11.251075 containerd[1509]: time="2025-01-29T16:26:11.251037797Z" level=info msg="StopPodSandbox for \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\" returns successfully" Jan 29 16:26:11.251967 containerd[1509]: time="2025-01-29T16:26:11.251550338Z" level=info msg="StopPodSandbox for \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\"" Jan 29 16:26:11.251967 containerd[1509]: time="2025-01-29T16:26:11.251662742Z" level=info msg="TearDown network for sandbox \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\" successfully" Jan 29 16:26:11.251967 containerd[1509]: time="2025-01-29T16:26:11.251676338Z" level=info msg="StopPodSandbox for \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\" returns successfully" Jan 29 16:26:11.252145 containerd[1509]: time="2025-01-29T16:26:11.252109216Z" level=info msg="StopPodSandbox for \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\"" Jan 29 16:26:11.252246 containerd[1509]: time="2025-01-29T16:26:11.252209297Z" level=info msg="TearDown network for sandbox \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\" successfully" Jan 29 16:26:11.252246 containerd[1509]: time="2025-01-29T16:26:11.252235337Z" level=info msg="StopPodSandbox for \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\" returns successfully" Jan 29 16:26:11.252844 containerd[1509]: time="2025-01-29T16:26:11.252810617Z" level=info msg="StopPodSandbox for \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\"" Jan 29 16:26:11.253229 containerd[1509]: time="2025-01-29T16:26:11.253037852Z" level=info msg="TearDown network for sandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\" successfully" Jan 29 16:26:11.253229 containerd[1509]: time="2025-01-29T16:26:11.253056839Z" level=info msg="StopPodSandbox for 
\"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\" returns successfully" Jan 29 16:26:11.256115 containerd[1509]: time="2025-01-29T16:26:11.253926562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjx6x,Uid:9cc09215-26d9-4b38-816c-abf4c3c659ad,Namespace:calico-system,Attempt:4,}" Jan 29 16:26:11.256186 kubelet[2623]: I0129 16:26:11.256010 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f" Jan 29 16:26:11.257088 containerd[1509]: time="2025-01-29T16:26:11.256871353Z" level=info msg="StopPodSandbox for \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\"" Jan 29 16:26:11.257579 containerd[1509]: time="2025-01-29T16:26:11.257462484Z" level=info msg="Ensure that sandbox 4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f in task-service has been cleanup successfully" Jan 29 16:26:11.258555 containerd[1509]: time="2025-01-29T16:26:11.258499107Z" level=info msg="TearDown network for sandbox \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\" successfully" Jan 29 16:26:11.258695 containerd[1509]: time="2025-01-29T16:26:11.258553651Z" level=info msg="StopPodSandbox for \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\" returns successfully" Jan 29 16:26:11.259285 containerd[1509]: time="2025-01-29T16:26:11.259256435Z" level=info msg="StopPodSandbox for \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\"" Jan 29 16:26:11.263917 kubelet[2623]: I0129 16:26:11.263511 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73" Jan 29 16:26:11.264944 containerd[1509]: time="2025-01-29T16:26:11.264162748Z" level=info msg="StopPodSandbox for \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\"" Jan 29 16:26:11.264944 containerd[1509]: 
time="2025-01-29T16:26:11.264757825Z" level=info msg="Ensure that sandbox 91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73 in task-service has been cleanup successfully" Jan 29 16:26:11.265854 containerd[1509]: time="2025-01-29T16:26:11.265771925Z" level=info msg="TearDown network for sandbox \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\" successfully" Jan 29 16:26:11.265854 containerd[1509]: time="2025-01-29T16:26:11.265816501Z" level=info msg="StopPodSandbox for \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\" returns successfully" Jan 29 16:26:11.266454 containerd[1509]: time="2025-01-29T16:26:11.266211026Z" level=info msg="TearDown network for sandbox \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\" successfully" Jan 29 16:26:11.266454 containerd[1509]: time="2025-01-29T16:26:11.266229560Z" level=info msg="StopPodSandbox for \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\" returns successfully" Jan 29 16:26:11.267545 containerd[1509]: time="2025-01-29T16:26:11.267497225Z" level=info msg="StopPodSandbox for \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\"" Jan 29 16:26:11.267641 containerd[1509]: time="2025-01-29T16:26:11.267611573Z" level=info msg="TearDown network for sandbox \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\" successfully" Jan 29 16:26:11.267641 containerd[1509]: time="2025-01-29T16:26:11.267635409Z" level=info msg="StopPodSandbox for \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\" returns successfully" Jan 29 16:26:11.268455 containerd[1509]: time="2025-01-29T16:26:11.268177617Z" level=info msg="StopPodSandbox for \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\"" Jan 29 16:26:11.268455 containerd[1509]: time="2025-01-29T16:26:11.268274060Z" level=info msg="TearDown network for sandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\" successfully" 
Jan 29 16:26:11.268455 containerd[1509]: time="2025-01-29T16:26:11.268287156Z" level=info msg="StopPodSandbox for \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\" returns successfully" Jan 29 16:26:11.269349 containerd[1509]: time="2025-01-29T16:26:11.269001291Z" level=info msg="StopPodSandbox for \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\"" Jan 29 16:26:11.269349 containerd[1509]: time="2025-01-29T16:26:11.269106513Z" level=info msg="TearDown network for sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" successfully" Jan 29 16:26:11.269349 containerd[1509]: time="2025-01-29T16:26:11.269123175Z" level=info msg="StopPodSandbox for \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" returns successfully" Jan 29 16:26:11.269349 containerd[1509]: time="2025-01-29T16:26:11.269284944Z" level=info msg="StopPodSandbox for \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\"" Jan 29 16:26:11.270167 containerd[1509]: time="2025-01-29T16:26:11.269772336Z" level=info msg="TearDown network for sandbox \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\" successfully" Jan 29 16:26:11.270167 containerd[1509]: time="2025-01-29T16:26:11.269793076Z" level=info msg="StopPodSandbox for \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\" returns successfully" Jan 29 16:26:11.270244 kubelet[2623]: E0129 16:26:11.269950 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:11.272032 containerd[1509]: time="2025-01-29T16:26:11.271779584Z" level=info msg="StopPodSandbox for \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\"" Jan 29 16:26:11.272032 containerd[1509]: time="2025-01-29T16:26:11.271952776Z" level=info msg="TearDown network for sandbox 
\"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\" successfully" Jan 29 16:26:11.272032 containerd[1509]: time="2025-01-29T16:26:11.271969167Z" level=info msg="StopPodSandbox for \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\" returns successfully" Jan 29 16:26:11.272830 kubelet[2623]: I0129 16:26:11.272388 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683" Jan 29 16:26:11.273303 containerd[1509]: time="2025-01-29T16:26:11.273003075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6kc2r,Uid:d39cd512-c288-44a0-b875-c359ef74dd3f,Namespace:kube-system,Attempt:5,}" Jan 29 16:26:11.273623 containerd[1509]: time="2025-01-29T16:26:11.273603964Z" level=info msg="StopPodSandbox for \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\"" Jan 29 16:26:11.273924 containerd[1509]: time="2025-01-29T16:26:11.273904779Z" level=info msg="Ensure that sandbox 6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683 in task-service has been cleanup successfully" Jan 29 16:26:11.274568 containerd[1509]: time="2025-01-29T16:26:11.274103900Z" level=info msg="StopPodSandbox for \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\"" Jan 29 16:26:11.274836 containerd[1509]: time="2025-01-29T16:26:11.274791384Z" level=info msg="TearDown network for sandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\" successfully" Jan 29 16:26:11.274836 containerd[1509]: time="2025-01-29T16:26:11.274831762Z" level=info msg="StopPodSandbox for \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\" returns successfully" Jan 29 16:26:11.275277 containerd[1509]: time="2025-01-29T16:26:11.275207380Z" level=info msg="TearDown network for sandbox \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\" successfully" Jan 29 16:26:11.275324 containerd[1509]: 
time="2025-01-29T16:26:11.275277123Z" level=info msg="StopPodSandbox for \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\" returns successfully" Jan 29 16:26:11.275664 containerd[1509]: time="2025-01-29T16:26:11.275502595Z" level=info msg="StopPodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\"" Jan 29 16:26:11.275664 containerd[1509]: time="2025-01-29T16:26:11.275607686Z" level=info msg="TearDown network for sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" successfully" Jan 29 16:26:11.275664 containerd[1509]: time="2025-01-29T16:26:11.275620640Z" level=info msg="StopPodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" returns successfully" Jan 29 16:26:11.277097 containerd[1509]: time="2025-01-29T16:26:11.276852817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-tzh8v,Uid:d3ce4654-53de-4d6a-8744-f657f07eba4f,Namespace:calico-apiserver,Attempt:5,}" Jan 29 16:26:11.277097 containerd[1509]: time="2025-01-29T16:26:11.276954462Z" level=info msg="StopPodSandbox for \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\"" Jan 29 16:26:11.277097 containerd[1509]: time="2025-01-29T16:26:11.277041518Z" level=info msg="TearDown network for sandbox \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\" successfully" Jan 29 16:26:11.277097 containerd[1509]: time="2025-01-29T16:26:11.277053841Z" level=info msg="StopPodSandbox for \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\" returns successfully" Jan 29 16:26:11.277756 containerd[1509]: time="2025-01-29T16:26:11.277607059Z" level=info msg="StopPodSandbox for \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\"" Jan 29 16:26:11.277756 containerd[1509]: time="2025-01-29T16:26:11.277700257Z" level=info msg="TearDown network for sandbox \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\" successfully" 
Jan 29 16:26:11.277756 containerd[1509]: time="2025-01-29T16:26:11.277712341Z" level=info msg="StopPodSandbox for \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\" returns successfully" Jan 29 16:26:11.278681 containerd[1509]: time="2025-01-29T16:26:11.278446164Z" level=info msg="StopPodSandbox for \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\"" Jan 29 16:26:11.278681 containerd[1509]: time="2025-01-29T16:26:11.278534543Z" level=info msg="TearDown network for sandbox \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\" successfully" Jan 29 16:26:11.278681 containerd[1509]: time="2025-01-29T16:26:11.278545334Z" level=info msg="StopPodSandbox for \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\" returns successfully" Jan 29 16:26:11.279377 kubelet[2623]: I0129 16:26:11.278927 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110" Jan 29 16:26:11.279975 containerd[1509]: time="2025-01-29T16:26:11.279601514Z" level=info msg="StopPodSandbox for \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\"" Jan 29 16:26:11.279975 containerd[1509]: time="2025-01-29T16:26:11.279745929Z" level=info msg="TearDown network for sandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\" successfully" Jan 29 16:26:11.279975 containerd[1509]: time="2025-01-29T16:26:11.279757763Z" level=info msg="StopPodSandbox for \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\" returns successfully" Jan 29 16:26:11.279975 containerd[1509]: time="2025-01-29T16:26:11.279961752Z" level=info msg="StopPodSandbox for \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\"" Jan 29 16:26:11.282959 kubelet[2623]: I0129 16:26:11.282226 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tg9rm" podStartSLOduration=1.060642319 
podStartE2EDuration="22.282211285s" podCreationTimestamp="2025-01-29 16:25:49 +0000 UTC" firstStartedPulling="2025-01-29 16:25:49.7496174 +0000 UTC m=+12.359114110" lastFinishedPulling="2025-01-29 16:26:10.971186366 +0000 UTC m=+33.580683076" observedRunningTime="2025-01-29 16:26:11.281075393 +0000 UTC m=+33.890572103" watchObservedRunningTime="2025-01-29 16:26:11.282211285 +0000 UTC m=+33.891707995" Jan 29 16:26:11.283196 containerd[1509]: time="2025-01-29T16:26:11.282283864Z" level=info msg="Ensure that sandbox 4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110 in task-service has been cleanup successfully" Jan 29 16:26:11.289197 containerd[1509]: time="2025-01-29T16:26:11.289148552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-n86vd,Uid:5f804998-68b0-408c-beb0-2887c4ad4908,Namespace:calico-apiserver,Attempt:5,}" Jan 29 16:26:11.291953 containerd[1509]: time="2025-01-29T16:26:11.289917081Z" level=info msg="TearDown network for sandbox \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\" successfully" Jan 29 16:26:11.292100 containerd[1509]: time="2025-01-29T16:26:11.291958245Z" level=info msg="StopPodSandbox for \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\" returns successfully" Jan 29 16:26:11.334278 containerd[1509]: time="2025-01-29T16:26:11.334141809Z" level=info msg="StopPodSandbox for \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\"" Jan 29 16:26:11.335309 containerd[1509]: time="2025-01-29T16:26:11.335276409Z" level=info msg="TearDown network for sandbox \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\" successfully" Jan 29 16:26:11.335309 containerd[1509]: time="2025-01-29T16:26:11.335299844Z" level=info msg="StopPodSandbox for \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\" returns successfully" Jan 29 16:26:11.337850 kubelet[2623]: I0129 16:26:11.337821 2623 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a" Jan 29 16:26:11.340561 containerd[1509]: time="2025-01-29T16:26:11.340093672Z" level=info msg="StopPodSandbox for \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\"" Jan 29 16:26:11.361879 containerd[1509]: time="2025-01-29T16:26:11.340370932Z" level=info msg="TearDown network for sandbox \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\" successfully" Jan 29 16:26:11.361879 containerd[1509]: time="2025-01-29T16:26:11.361796397Z" level=info msg="StopPodSandbox for \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\" returns successfully" Jan 29 16:26:11.362032 containerd[1509]: time="2025-01-29T16:26:11.352916524Z" level=info msg="StopPodSandbox for \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\"" Jan 29 16:26:11.362488 containerd[1509]: time="2025-01-29T16:26:11.362424858Z" level=info msg="Ensure that sandbox ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a in task-service has been cleanup successfully" Jan 29 16:26:11.363942 containerd[1509]: time="2025-01-29T16:26:11.363858681Z" level=info msg="StopPodSandbox for \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\"" Jan 29 16:26:11.364083 containerd[1509]: time="2025-01-29T16:26:11.364036691Z" level=info msg="TearDown network for sandbox \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\" successfully" Jan 29 16:26:11.364083 containerd[1509]: time="2025-01-29T16:26:11.364063893Z" level=info msg="StopPodSandbox for \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\" returns successfully" Jan 29 16:26:11.364918 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 16:26:11.364988 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Jan 29 16:26:11.365012 containerd[1509]: time="2025-01-29T16:26:11.364733724Z" level=info msg="StopPodSandbox for \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\"" Jan 29 16:26:11.365012 containerd[1509]: time="2025-01-29T16:26:11.364986608Z" level=info msg="TearDown network for sandbox \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\" successfully" Jan 29 16:26:11.365012 containerd[1509]: time="2025-01-29T16:26:11.365003871Z" level=info msg="StopPodSandbox for \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\" returns successfully" Jan 29 16:26:11.365791 containerd[1509]: time="2025-01-29T16:26:11.365761159Z" level=info msg="StopPodSandbox for \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\"" Jan 29 16:26:11.366970 containerd[1509]: time="2025-01-29T16:26:11.366750561Z" level=info msg="TearDown network for sandbox \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\" successfully" Jan 29 16:26:11.366970 containerd[1509]: time="2025-01-29T16:26:11.366772273Z" level=info msg="StopPodSandbox for \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\" returns successfully" Jan 29 16:26:11.367248 containerd[1509]: time="2025-01-29T16:26:11.367217345Z" level=info msg="StopPodSandbox for \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\"" Jan 29 16:26:11.367384 containerd[1509]: time="2025-01-29T16:26:11.367368112Z" level=info msg="TearDown network for sandbox \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\" successfully" Jan 29 16:26:11.367666 containerd[1509]: time="2025-01-29T16:26:11.367640343Z" level=info msg="StopPodSandbox for \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\" returns successfully" Jan 29 16:26:11.369299 containerd[1509]: time="2025-01-29T16:26:11.369124852Z" level=info msg="StopPodSandbox for \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\"" Jan 29 16:26:11.369299 
containerd[1509]: time="2025-01-29T16:26:11.369226316Z" level=info msg="TearDown network for sandbox \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\" successfully" Jan 29 16:26:11.369299 containerd[1509]: time="2025-01-29T16:26:11.369240013Z" level=info msg="StopPodSandbox for \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\" returns successfully" Jan 29 16:26:11.372461 containerd[1509]: time="2025-01-29T16:26:11.371807062Z" level=info msg="StopPodSandbox for \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\"" Jan 29 16:26:11.373467 containerd[1509]: time="2025-01-29T16:26:11.373328582Z" level=info msg="TearDown network for sandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\" successfully" Jan 29 16:26:11.373467 containerd[1509]: time="2025-01-29T16:26:11.373369970Z" level=info msg="StopPodSandbox for \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\" returns successfully" Jan 29 16:26:11.376634 kubelet[2623]: E0129 16:26:11.375811 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:11.377017 containerd[1509]: time="2025-01-29T16:26:11.376956930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-llv2c,Uid:5b88c73e-075c-4156-a283-4de15bccf36a,Namespace:kube-system,Attempt:5,}" Jan 29 16:26:11.389308 containerd[1509]: time="2025-01-29T16:26:11.389087559Z" level=info msg="TearDown network for sandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\" successfully" Jan 29 16:26:11.389308 containerd[1509]: time="2025-01-29T16:26:11.389171019Z" level=info msg="StopPodSandbox for \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\" returns successfully" Jan 29 16:26:11.391275 containerd[1509]: time="2025-01-29T16:26:11.390986431Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-66b4c55cd5-pmg6b,Uid:db40db35-a526-4e56-80d1-8bc8cd956a1c,Namespace:calico-system,Attempt:5,}" Jan 29 16:26:11.462432 containerd[1509]: time="2025-01-29T16:26:11.462223027Z" level=error msg="Failed to destroy network for sandbox \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.463101 containerd[1509]: time="2025-01-29T16:26:11.462986367Z" level=error msg="encountered an error cleaning up failed sandbox \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.463360 containerd[1509]: time="2025-01-29T16:26:11.463250733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjx6x,Uid:9cc09215-26d9-4b38-816c-abf4c3c659ad,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.463851 kubelet[2623]: E0129 16:26:11.463799 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.464310 kubelet[2623]: 
E0129 16:26:11.464060 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mjx6x" Jan 29 16:26:11.464310 kubelet[2623]: E0129 16:26:11.464103 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mjx6x" Jan 29 16:26:11.464310 kubelet[2623]: E0129 16:26:11.464161 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mjx6x_calico-system(9cc09215-26d9-4b38-816c-abf4c3c659ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mjx6x_calico-system(9cc09215-26d9-4b38-816c-abf4c3c659ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mjx6x" podUID="9cc09215-26d9-4b38-816c-abf4c3c659ad" Jan 29 16:26:11.841542 containerd[1509]: 2025-01-29 16:26:11.577 [INFO][4674] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8" Jan 29 16:26:11.841542 containerd[1509]: 2025-01-29 16:26:11.578 
[INFO][4674] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8" iface="eth0" netns="/var/run/netns/cni-fcd956c3-5bcd-e596-6517-da0fa539d32c" Jan 29 16:26:11.841542 containerd[1509]: 2025-01-29 16:26:11.578 [INFO][4674] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8" iface="eth0" netns="/var/run/netns/cni-fcd956c3-5bcd-e596-6517-da0fa539d32c" Jan 29 16:26:11.841542 containerd[1509]: 2025-01-29 16:26:11.580 [INFO][4674] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8" iface="eth0" netns="/var/run/netns/cni-fcd956c3-5bcd-e596-6517-da0fa539d32c" Jan 29 16:26:11.841542 containerd[1509]: 2025-01-29 16:26:11.580 [INFO][4674] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8" Jan 29 16:26:11.841542 containerd[1509]: 2025-01-29 16:26:11.580 [INFO][4674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8" Jan 29 16:26:11.841542 containerd[1509]: 2025-01-29 16:26:11.805 [INFO][4723] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8" HandleID="k8s-pod-network.8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8" Workload="localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0" Jan 29 16:26:11.841542 containerd[1509]: 2025-01-29 16:26:11.806 [INFO][4723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:26:11.841542 containerd[1509]: 2025-01-29 16:26:11.807 [INFO][4723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:26:11.841542 containerd[1509]: 2025-01-29 16:26:11.826 [WARNING][4723] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8" HandleID="k8s-pod-network.8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8" Workload="localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0" Jan 29 16:26:11.841542 containerd[1509]: 2025-01-29 16:26:11.826 [INFO][4723] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8" HandleID="k8s-pod-network.8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8" Workload="localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0" Jan 29 16:26:11.841542 containerd[1509]: 2025-01-29 16:26:11.830 [INFO][4723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:26:11.841542 containerd[1509]: 2025-01-29 16:26:11.836 [INFO][4674] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8" Jan 29 16:26:11.857474 containerd[1509]: 2025-01-29 16:26:11.577 [INFO][4645] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc" Jan 29 16:26:11.857474 containerd[1509]: 2025-01-29 16:26:11.579 [INFO][4645] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc" iface="eth0" netns="/var/run/netns/cni-016293df-47db-9c70-4b20-02d457295c6e" Jan 29 16:26:11.857474 containerd[1509]: 2025-01-29 16:26:11.580 [INFO][4645] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc" iface="eth0" netns="/var/run/netns/cni-016293df-47db-9c70-4b20-02d457295c6e" Jan 29 16:26:11.857474 containerd[1509]: 2025-01-29 16:26:11.580 [INFO][4645] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc" iface="eth0" netns="/var/run/netns/cni-016293df-47db-9c70-4b20-02d457295c6e" Jan 29 16:26:11.857474 containerd[1509]: 2025-01-29 16:26:11.580 [INFO][4645] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc" Jan 29 16:26:11.857474 containerd[1509]: 2025-01-29 16:26:11.580 [INFO][4645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc" Jan 29 16:26:11.857474 containerd[1509]: 2025-01-29 16:26:11.804 [INFO][4724] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc" HandleID="k8s-pod-network.bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc" Workload="localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0" Jan 29 16:26:11.857474 containerd[1509]: 2025-01-29 16:26:11.806 [INFO][4724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:26:11.857474 containerd[1509]: 2025-01-29 16:26:11.830 [INFO][4724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 16:26:11.857474 containerd[1509]: 2025-01-29 16:26:11.839 [WARNING][4724] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc" HandleID="k8s-pod-network.bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc" Workload="localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0" Jan 29 16:26:11.857474 containerd[1509]: 2025-01-29 16:26:11.839 [INFO][4724] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc" HandleID="k8s-pod-network.bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc" Workload="localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0" Jan 29 16:26:11.857474 containerd[1509]: 2025-01-29 16:26:11.844 [INFO][4724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:26:11.857474 containerd[1509]: 2025-01-29 16:26:11.848 [INFO][4645] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc" Jan 29 16:26:11.866099 systemd[1]: run-netns-cni\x2d07f30043\x2d9d8c\x2ddf1f\x2d401d\x2d5a9c10838c37.mount: Deactivated successfully. Jan 29 16:26:11.866248 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a-shm.mount: Deactivated successfully. Jan 29 16:26:11.866360 systemd[1]: run-netns-cni\x2dcf30d964\x2d825c\x2d6869\x2d00c4\x2daaac0a503160.mount: Deactivated successfully. Jan 29 16:26:11.866498 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73-shm.mount: Deactivated successfully. Jan 29 16:26:11.866600 systemd[1]: run-netns-cni\x2dbd874b9d\x2d8f36\x2d2ec8\x2db1e6\x2d5f208a2a32ad.mount: Deactivated successfully. Jan 29 16:26:11.866696 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723-shm.mount: Deactivated successfully. 
Jan 29 16:26:11.866800 systemd[1]: run-netns-cni\x2de6710163\x2d595c\x2dd7d9\x2de9f5\x2dba3a78dea84d.mount: Deactivated successfully. Jan 29 16:26:11.866897 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683-shm.mount: Deactivated successfully. Jan 29 16:26:11.901201 containerd[1509]: time="2025-01-29T16:26:11.901128890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6kc2r,Uid:d39cd512-c288-44a0-b875-c359ef74dd3f,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.901761 kubelet[2623]: E0129 16:26:11.901661 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.901864 kubelet[2623]: E0129 16:26:11.901777 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6kc2r" Jan 29 16:26:11.901864 kubelet[2623]: E0129 16:26:11.901828 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6kc2r" Jan 29 16:26:11.901936 kubelet[2623]: E0129 16:26:11.901881 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6kc2r_kube-system(d39cd512-c288-44a0-b875-c359ef74dd3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6kc2r_kube-system(d39cd512-c288-44a0-b875-c359ef74dd3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcff545a9835b7614d8abb87752053a0d002f76813b6605dfaede46c59a592fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6kc2r" podUID="d39cd512-c288-44a0-b875-c359ef74dd3f" Jan 29 16:26:11.905506 containerd[1509]: time="2025-01-29T16:26:11.904297901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-tzh8v,Uid:d3ce4654-53de-4d6a-8744-f657f07eba4f,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.905739 kubelet[2623]: E0129 16:26:11.904620 2623 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:11.905739 kubelet[2623]: E0129 16:26:11.904677 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" Jan 29 16:26:11.905739 kubelet[2623]: E0129 16:26:11.904704 2623 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" Jan 29 16:26:11.905861 kubelet[2623]: E0129 16:26:11.904760 2623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5947747589-tzh8v_calico-apiserver(d3ce4654-53de-4d6a-8744-f657f07eba4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5947747589-tzh8v_calico-apiserver(d3ce4654-53de-4d6a-8744-f657f07eba4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8aca69ced3aaaacab31b7e52da7f1096b9e5887731dfe538031181f2e94a13d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" podUID="d3ce4654-53de-4d6a-8744-f657f07eba4f" Jan 29 16:26:11.988274 systemd-networkd[1445]: 
calie87059a3460: Link UP Jan 29 16:26:11.988795 systemd-networkd[1445]: calie87059a3460: Gained carrier Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.567 [INFO][4683] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.598 [INFO][4683] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--llv2c-eth0 coredns-6f6b679f8f- kube-system 5b88c73e-075c-4156-a283-4de15bccf36a 695 0 2025-01-29 16:25:42 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-llv2c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie87059a3460 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-llv2c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--llv2c-" Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.599 [INFO][4683] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-llv2c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--llv2c-eth0" Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.804 [INFO][4730] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" HandleID="k8s-pod-network.e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" Workload="localhost-k8s-coredns--6f6b679f8f--llv2c-eth0" Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.828 [INFO][4730] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" HandleID="k8s-pod-network.e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" Workload="localhost-k8s-coredns--6f6b679f8f--llv2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036a880), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-llv2c", "timestamp":"2025-01-29 16:26:11.804724442 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.828 [INFO][4730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.847 [INFO][4730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.848 [INFO][4730] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.852 [INFO][4730] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" host="localhost" Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.925 [INFO][4730] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.938 [INFO][4730] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.942 [INFO][4730] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.948 [INFO][4730] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.949 [INFO][4730] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" host="localhost" Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.952 [INFO][4730] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0 Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.960 [INFO][4730] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" host="localhost" Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.970 [INFO][4730] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" host="localhost" Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.970 [INFO][4730] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" host="localhost" Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.970 [INFO][4730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 16:26:12.014519 containerd[1509]: 2025-01-29 16:26:11.970 [INFO][4730] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" HandleID="k8s-pod-network.e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" Workload="localhost-k8s-coredns--6f6b679f8f--llv2c-eth0" Jan 29 16:26:12.015349 containerd[1509]: 2025-01-29 16:26:11.974 [INFO][4683] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-llv2c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--llv2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--llv2c-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5b88c73e-075c-4156-a283-4de15bccf36a", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 25, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-llv2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie87059a3460", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:26:12.015349 containerd[1509]: 2025-01-29 16:26:11.974 [INFO][4683] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-llv2c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--llv2c-eth0" Jan 29 16:26:12.015349 containerd[1509]: 2025-01-29 16:26:11.974 [INFO][4683] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie87059a3460 ContainerID="e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-llv2c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--llv2c-eth0" Jan 29 16:26:12.015349 containerd[1509]: 2025-01-29 16:26:11.989 [INFO][4683] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-llv2c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--llv2c-eth0" Jan 29 16:26:12.015349 containerd[1509]: 2025-01-29 16:26:11.989 [INFO][4683] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-llv2c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--llv2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--llv2c-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5b88c73e-075c-4156-a283-4de15bccf36a", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 25, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0", Pod:"coredns-6f6b679f8f-llv2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie87059a3460", MAC:"56:0d:14:5f:de:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:26:12.015349 containerd[1509]: 2025-01-29 16:26:12.005 [INFO][4683] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-llv2c" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--llv2c-eth0" Jan 29 16:26:12.090306 systemd-networkd[1445]: calic1367680068: Link UP Jan 29 16:26:12.093190 systemd-networkd[1445]: calic1367680068: Gained carrier Jan 29 16:26:12.121593 containerd[1509]: time="2025-01-29T16:26:12.118115511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:12.121593 containerd[1509]: time="2025-01-29T16:26:12.118211453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:12.121593 containerd[1509]: time="2025-01-29T16:26:12.118239006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:11.582 [INFO][4659] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:11.600 [INFO][4659] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5947747589--n86vd-eth0 calico-apiserver-5947747589- calico-apiserver 5f804998-68b0-408c-beb0-2887c4ad4908 701 0 2025-01-29 16:25:49 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5947747589 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5947747589-n86vd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic1367680068 [] []}} ContainerID="9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-n86vd" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--n86vd-" Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:11.600 [INFO][4659] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-n86vd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--n86vd-eth0" Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:11.805 [INFO][4729] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" HandleID="k8s-pod-network.9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" Workload="localhost-k8s-calico--apiserver--5947747589--n86vd-eth0" Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:11.829 [INFO][4729] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" HandleID="k8s-pod-network.9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" Workload="localhost-k8s-calico--apiserver--5947747589--n86vd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00018a240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5947747589-n86vd", "timestamp":"2025-01-29 16:26:11.805193178 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:11.829 [INFO][4729] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:11.970 [INFO][4729] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:11.970 [INFO][4729] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:11.978 [INFO][4729] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" host="localhost" Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:12.026 [INFO][4729] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:12.036 [INFO][4729] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:12.044 [INFO][4729] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:12.049 [INFO][4729] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:12.049 [INFO][4729] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" host="localhost" Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:12.052 [INFO][4729] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594 Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:12.063 [INFO][4729] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" host="localhost" Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:12.082 [INFO][4729] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" host="localhost" Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:12.082 [INFO][4729] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" host="localhost" Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:12.082 [INFO][4729] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:26:12.123792 containerd[1509]: 2025-01-29 16:26:12.082 [INFO][4729] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" HandleID="k8s-pod-network.9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" Workload="localhost-k8s-calico--apiserver--5947747589--n86vd-eth0" Jan 29 16:26:12.124599 containerd[1509]: 2025-01-29 16:26:12.086 [INFO][4659] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-n86vd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--n86vd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5947747589--n86vd-eth0", GenerateName:"calico-apiserver-5947747589-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f804998-68b0-408c-beb0-2887c4ad4908", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 25, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5947747589", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5947747589-n86vd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1367680068", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:26:12.124599 containerd[1509]: 2025-01-29 16:26:12.087 [INFO][4659] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-n86vd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--n86vd-eth0" Jan 29 16:26:12.124599 containerd[1509]: 2025-01-29 16:26:12.087 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1367680068 ContainerID="9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-n86vd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--n86vd-eth0" Jan 29 16:26:12.124599 containerd[1509]: 2025-01-29 16:26:12.092 [INFO][4659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-n86vd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--n86vd-eth0" Jan 29 16:26:12.124599 containerd[1509]: 2025-01-29 16:26:12.093 [INFO][4659] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-n86vd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--n86vd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5947747589--n86vd-eth0", GenerateName:"calico-apiserver-5947747589-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f804998-68b0-408c-beb0-2887c4ad4908", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 25, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5947747589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594", Pod:"calico-apiserver-5947747589-n86vd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1367680068", MAC:"b6:1a:bc:70:48:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:26:12.124599 containerd[1509]: 2025-01-29 16:26:12.115 [INFO][4659] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-n86vd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--n86vd-eth0" Jan 29 16:26:12.124599 containerd[1509]: time="2025-01-29T16:26:12.123178037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:12.161031 systemd[1]: Started cri-containerd-e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0.scope - libcontainer container e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0. Jan 29 16:26:12.175738 containerd[1509]: time="2025-01-29T16:26:12.173827902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:12.175738 containerd[1509]: time="2025-01-29T16:26:12.175358018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:12.175738 containerd[1509]: time="2025-01-29T16:26:12.175376873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:12.175738 containerd[1509]: time="2025-01-29T16:26:12.175533333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:12.184255 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:26:12.190933 systemd-networkd[1445]: calic935704bf7d: Link UP Jan 29 16:26:12.192068 systemd-networkd[1445]: calic935704bf7d: Gained carrier Jan 29 16:26:12.213005 systemd[1]: Started cri-containerd-9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594.scope - libcontainer container 9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594. 
Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:11.542 [INFO][4696] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:11.561 [INFO][4696] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--66b4c55cd5--pmg6b-eth0 calico-kube-controllers-66b4c55cd5- calico-system db40db35-a526-4e56-80d1-8bc8cd956a1c 699 0 2025-01-29 16:25:49 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66b4c55cd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-66b4c55cd5-pmg6b eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic935704bf7d [] []}} ContainerID="ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" Namespace="calico-system" Pod="calico-kube-controllers-66b4c55cd5-pmg6b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4c55cd5--pmg6b-" Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:11.561 [INFO][4696] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" Namespace="calico-system" Pod="calico-kube-controllers-66b4c55cd5-pmg6b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4c55cd5--pmg6b-eth0" Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:11.805 [INFO][4727] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" HandleID="k8s-pod-network.ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" Workload="localhost-k8s-calico--kube--controllers--66b4c55cd5--pmg6b-eth0" Jan 29 16:26:12.223290 
containerd[1509]: 2025-01-29 16:26:11.828 [INFO][4727] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" HandleID="k8s-pod-network.ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" Workload="localhost-k8s-calico--kube--controllers--66b4c55cd5--pmg6b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000124960), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-66b4c55cd5-pmg6b", "timestamp":"2025-01-29 16:26:11.805072317 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:11.829 [INFO][4727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:12.082 [INFO][4727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:12.082 [INFO][4727] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:12.089 [INFO][4727] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" host="localhost" Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:12.126 [INFO][4727] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:12.147 [INFO][4727] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:12.149 [INFO][4727] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:12.153 [INFO][4727] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:12.154 [INFO][4727] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" host="localhost" Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:12.158 [INFO][4727] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9 Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:12.167 [INFO][4727] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" host="localhost" Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:12.181 [INFO][4727] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" host="localhost" Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:12.181 [INFO][4727] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" host="localhost" Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:12.181 [INFO][4727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:26:12.223290 containerd[1509]: 2025-01-29 16:26:12.181 [INFO][4727] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" HandleID="k8s-pod-network.ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" Workload="localhost-k8s-calico--kube--controllers--66b4c55cd5--pmg6b-eth0" Jan 29 16:26:12.224353 containerd[1509]: 2025-01-29 16:26:12.186 [INFO][4696] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" Namespace="calico-system" Pod="calico-kube-controllers-66b4c55cd5-pmg6b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4c55cd5--pmg6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66b4c55cd5--pmg6b-eth0", GenerateName:"calico-kube-controllers-66b4c55cd5-", Namespace:"calico-system", SelfLink:"", UID:"db40db35-a526-4e56-80d1-8bc8cd956a1c", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 25, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66b4c55cd5", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-66b4c55cd5-pmg6b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic935704bf7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:26:12.224353 containerd[1509]: 2025-01-29 16:26:12.187 [INFO][4696] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" Namespace="calico-system" Pod="calico-kube-controllers-66b4c55cd5-pmg6b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4c55cd5--pmg6b-eth0" Jan 29 16:26:12.224353 containerd[1509]: 2025-01-29 16:26:12.187 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic935704bf7d ContainerID="ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" Namespace="calico-system" Pod="calico-kube-controllers-66b4c55cd5-pmg6b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4c55cd5--pmg6b-eth0" Jan 29 16:26:12.224353 containerd[1509]: 2025-01-29 16:26:12.192 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" Namespace="calico-system" Pod="calico-kube-controllers-66b4c55cd5-pmg6b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4c55cd5--pmg6b-eth0" Jan 29 16:26:12.224353 containerd[1509]: 2025-01-29 16:26:12.192 [INFO][4696] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" Namespace="calico-system" Pod="calico-kube-controllers-66b4c55cd5-pmg6b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4c55cd5--pmg6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66b4c55cd5--pmg6b-eth0", GenerateName:"calico-kube-controllers-66b4c55cd5-", Namespace:"calico-system", SelfLink:"", UID:"db40db35-a526-4e56-80d1-8bc8cd956a1c", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 25, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66b4c55cd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9", Pod:"calico-kube-controllers-66b4c55cd5-pmg6b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic935704bf7d", MAC:"de:1a:a3:1f:70:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:26:12.224353 containerd[1509]: 2025-01-29 16:26:12.215 [INFO][4696] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9" Namespace="calico-system" Pod="calico-kube-controllers-66b4c55cd5-pmg6b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66b4c55cd5--pmg6b-eth0" Jan 29 16:26:12.235265 containerd[1509]: time="2025-01-29T16:26:12.234759761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-llv2c,Uid:5b88c73e-075c-4156-a283-4de15bccf36a,Namespace:kube-system,Attempt:5,} returns sandbox id \"e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0\"" Jan 29 16:26:12.236617 kubelet[2623]: E0129 16:26:12.236565 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:12.239336 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:26:12.239881 containerd[1509]: time="2025-01-29T16:26:12.239675458Z" level=info msg="CreateContainer within sandbox \"e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:26:12.270377 containerd[1509]: time="2025-01-29T16:26:12.270201777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:12.270659 containerd[1509]: time="2025-01-29T16:26:12.270579800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:12.270747 containerd[1509]: time="2025-01-29T16:26:12.270645896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:12.271030 containerd[1509]: time="2025-01-29T16:26:12.270994162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:12.281766 containerd[1509]: time="2025-01-29T16:26:12.281645381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-n86vd,Uid:5f804998-68b0-408c-beb0-2887c4ad4908,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594\"" Jan 29 16:26:12.284749 containerd[1509]: time="2025-01-29T16:26:12.284498525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 16:26:12.292594 containerd[1509]: time="2025-01-29T16:26:12.292551538Z" level=info msg="CreateContainer within sandbox \"e1d79b04a9899304c74d428d900934aa213eec45d9d967bd7afad0ea8126f2c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ee07a0b4ceecef0e431ac0def95ef44bfbbecb7157aa26303fd2a7a33a7f4bb\"" Jan 29 16:26:12.295604 containerd[1509]: time="2025-01-29T16:26:12.295487530Z" level=info msg="StartContainer for \"0ee07a0b4ceecef0e431ac0def95ef44bfbbecb7157aa26303fd2a7a33a7f4bb\"" Jan 29 16:26:12.297776 systemd[1]: Started cri-containerd-ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9.scope - libcontainer container ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9. Jan 29 16:26:12.319510 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:26:12.339718 systemd[1]: Started cri-containerd-0ee07a0b4ceecef0e431ac0def95ef44bfbbecb7157aa26303fd2a7a33a7f4bb.scope - libcontainer container 0ee07a0b4ceecef0e431ac0def95ef44bfbbecb7157aa26303fd2a7a33a7f4bb. 
Jan 29 16:26:12.368457 kubelet[2623]: I0129 16:26:12.366783 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980" Jan 29 16:26:12.370264 containerd[1509]: time="2025-01-29T16:26:12.370226120Z" level=info msg="StopPodSandbox for \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\"" Jan 29 16:26:12.372353 containerd[1509]: time="2025-01-29T16:26:12.371032332Z" level=info msg="Ensure that sandbox 0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980 in task-service has been cleanup successfully" Jan 29 16:26:12.374591 containerd[1509]: time="2025-01-29T16:26:12.372922564Z" level=info msg="TearDown network for sandbox \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\" successfully" Jan 29 16:26:12.374755 containerd[1509]: time="2025-01-29T16:26:12.374729588Z" level=info msg="StopPodSandbox for \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\" returns successfully" Jan 29 16:26:12.376065 containerd[1509]: time="2025-01-29T16:26:12.375773714Z" level=info msg="StopPodSandbox for \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\"" Jan 29 16:26:12.376065 containerd[1509]: time="2025-01-29T16:26:12.375913451Z" level=info msg="TearDown network for sandbox \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\" successfully" Jan 29 16:26:12.376065 containerd[1509]: time="2025-01-29T16:26:12.375929302Z" level=info msg="StopPodSandbox for \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\" returns successfully" Jan 29 16:26:12.378415 containerd[1509]: time="2025-01-29T16:26:12.378317076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66b4c55cd5-pmg6b,Uid:db40db35-a526-4e56-80d1-8bc8cd956a1c,Namespace:calico-system,Attempt:5,} returns sandbox id \"ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9\"" Jan 29 
16:26:12.380642 containerd[1509]: time="2025-01-29T16:26:12.378360840Z" level=info msg="StopPodSandbox for \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\"" Jan 29 16:26:12.380642 containerd[1509]: time="2025-01-29T16:26:12.379689219Z" level=info msg="TearDown network for sandbox \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\" successfully" Jan 29 16:26:12.380642 containerd[1509]: time="2025-01-29T16:26:12.379709618Z" level=info msg="StopPodSandbox for \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\" returns successfully" Jan 29 16:26:12.382534 containerd[1509]: time="2025-01-29T16:26:12.382385632Z" level=info msg="StopPodSandbox for \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\"" Jan 29 16:26:12.382723 containerd[1509]: time="2025-01-29T16:26:12.382560247Z" level=info msg="TearDown network for sandbox \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\" successfully" Jan 29 16:26:12.382723 containerd[1509]: time="2025-01-29T16:26:12.382573452Z" level=info msg="StopPodSandbox for \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\" returns successfully" Jan 29 16:26:12.384193 containerd[1509]: time="2025-01-29T16:26:12.383255215Z" level=info msg="StopPodSandbox for \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\"" Jan 29 16:26:12.384193 containerd[1509]: time="2025-01-29T16:26:12.383361759Z" level=info msg="TearDown network for sandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\" successfully" Jan 29 16:26:12.384193 containerd[1509]: time="2025-01-29T16:26:12.383381276Z" level=info msg="StopPodSandbox for \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\" returns successfully" Jan 29 16:26:12.384193 containerd[1509]: time="2025-01-29T16:26:12.383940686Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-mjx6x,Uid:9cc09215-26d9-4b38-816c-abf4c3c659ad,Namespace:calico-system,Attempt:5,}" Jan 29 16:26:12.411379 containerd[1509]: time="2025-01-29T16:26:12.411307004Z" level=info msg="StopPodSandbox for \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\"" Jan 29 16:26:12.413840 containerd[1509]: time="2025-01-29T16:26:12.413740546Z" level=info msg="TearDown network for sandbox \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\" successfully" Jan 29 16:26:12.413840 containerd[1509]: time="2025-01-29T16:26:12.413826160Z" level=info msg="StopPodSandbox for \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\" returns successfully" Jan 29 16:26:12.415166 containerd[1509]: time="2025-01-29T16:26:12.414957814Z" level=info msg="StopPodSandbox for \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\"" Jan 29 16:26:12.415543 containerd[1509]: time="2025-01-29T16:26:12.415387124Z" level=info msg="TearDown network for sandbox \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\" successfully" Jan 29 16:26:12.415691 containerd[1509]: time="2025-01-29T16:26:12.415623205Z" level=info msg="StopPodSandbox for \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\" returns successfully" Jan 29 16:26:12.416687 containerd[1509]: time="2025-01-29T16:26:12.416625070Z" level=info msg="StopPodSandbox for \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\"" Jan 29 16:26:12.417147 containerd[1509]: time="2025-01-29T16:26:12.417017771Z" level=info msg="TearDown network for sandbox \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\" successfully" Jan 29 16:26:12.417147 containerd[1509]: time="2025-01-29T16:26:12.417089268Z" level=info msg="StopPodSandbox for \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\" returns successfully" Jan 29 16:26:12.418040 containerd[1509]: time="2025-01-29T16:26:12.417836867Z" 
level=info msg="StopPodSandbox for \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\"" Jan 29 16:26:12.418040 containerd[1509]: time="2025-01-29T16:26:12.417943851Z" level=info msg="TearDown network for sandbox \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\" successfully" Jan 29 16:26:12.418040 containerd[1509]: time="2025-01-29T16:26:12.417959280Z" level=info msg="StopPodSandbox for \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\" returns successfully" Jan 29 16:26:12.418327 containerd[1509]: time="2025-01-29T16:26:12.418147751Z" level=info msg="StopPodSandbox for \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\"" Jan 29 16:26:12.418700 containerd[1509]: time="2025-01-29T16:26:12.418677864Z" level=info msg="TearDown network for sandbox \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\" successfully" Jan 29 16:26:12.419095 containerd[1509]: time="2025-01-29T16:26:12.419058772Z" level=info msg="StopPodSandbox for \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\" returns successfully" Jan 29 16:26:12.419231 containerd[1509]: time="2025-01-29T16:26:12.419204200Z" level=info msg="StopPodSandbox for \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\"" Jan 29 16:26:12.421881 containerd[1509]: time="2025-01-29T16:26:12.420936501Z" level=info msg="TearDown network for sandbox \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\" successfully" Jan 29 16:26:12.421881 containerd[1509]: time="2025-01-29T16:26:12.420959836Z" level=info msg="StopPodSandbox for \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\" returns successfully" Jan 29 16:26:12.423944 containerd[1509]: time="2025-01-29T16:26:12.422223892Z" level=info msg="StopPodSandbox for \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\"" Jan 29 16:26:12.423944 containerd[1509]: time="2025-01-29T16:26:12.422328311Z" level=info msg="TearDown 
network for sandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\" successfully" Jan 29 16:26:12.423944 containerd[1509]: time="2025-01-29T16:26:12.422341767Z" level=info msg="StopPodSandbox for \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\" returns successfully" Jan 29 16:26:12.423944 containerd[1509]: time="2025-01-29T16:26:12.422447088Z" level=info msg="StopPodSandbox for \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\"" Jan 29 16:26:12.423944 containerd[1509]: time="2025-01-29T16:26:12.422529566Z" level=info msg="TearDown network for sandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\" successfully" Jan 29 16:26:12.423944 containerd[1509]: time="2025-01-29T16:26:12.422542461Z" level=info msg="StopPodSandbox for \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\" returns successfully" Jan 29 16:26:12.424820 containerd[1509]: time="2025-01-29T16:26:12.424534909Z" level=info msg="StopPodSandbox for \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\"" Jan 29 16:26:12.424820 containerd[1509]: time="2025-01-29T16:26:12.424642706Z" level=info msg="TearDown network for sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" successfully" Jan 29 16:26:12.424820 containerd[1509]: time="2025-01-29T16:26:12.424658926Z" level=info msg="StopPodSandbox for \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" returns successfully" Jan 29 16:26:12.424820 containerd[1509]: time="2025-01-29T16:26:12.424718801Z" level=info msg="StopPodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\"" Jan 29 16:26:12.426780 kubelet[2623]: E0129 16:26:12.426698 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:12.427207 containerd[1509]: time="2025-01-29T16:26:12.427153585Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6kc2r,Uid:d39cd512-c288-44a0-b875-c359ef74dd3f,Namespace:kube-system,Attempt:5,}" Jan 29 16:26:12.427382 containerd[1509]: time="2025-01-29T16:26:12.427337346Z" level=info msg="TearDown network for sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" successfully" Jan 29 16:26:12.427487 containerd[1509]: time="2025-01-29T16:26:12.427470010Z" level=info msg="StopPodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" returns successfully" Jan 29 16:26:12.429311 containerd[1509]: time="2025-01-29T16:26:12.429263508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-tzh8v,Uid:d3ce4654-53de-4d6a-8744-f657f07eba4f,Namespace:calico-apiserver,Attempt:5,}" Jan 29 16:26:12.429574 containerd[1509]: time="2025-01-29T16:26:12.429430507Z" level=info msg="StartContainer for \"0ee07a0b4ceecef0e431ac0def95ef44bfbbecb7157aa26303fd2a7a33a7f4bb\" returns successfully" Jan 29 16:26:12.623992 systemd-networkd[1445]: calid1ac614fb32: Link UP Jan 29 16:26:12.624817 systemd-networkd[1445]: calid1ac614fb32: Gained carrier Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.463 [INFO][4960] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.487 [INFO][4960] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mjx6x-eth0 csi-node-driver- calico-system 9cc09215-26d9-4b38-816c-abf4c3c659ad 593 0 2025-01-29 16:25:49 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mjx6x 
eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid1ac614fb32 [] []}} ContainerID="cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" Namespace="calico-system" Pod="csi-node-driver-mjx6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--mjx6x-" Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.487 [INFO][4960] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" Namespace="calico-system" Pod="csi-node-driver-mjx6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--mjx6x-eth0" Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.545 [INFO][4999] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" HandleID="k8s-pod-network.cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" Workload="localhost-k8s-csi--node--driver--mjx6x-eth0" Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.558 [INFO][4999] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" HandleID="k8s-pod-network.cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" Workload="localhost-k8s-csi--node--driver--mjx6x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c6770), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mjx6x", "timestamp":"2025-01-29 16:26:12.545036734 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.558 [INFO][4999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.558 [INFO][4999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.558 [INFO][4999] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.563 [INFO][4999] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" host="localhost" Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.571 [INFO][4999] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.580 [INFO][4999] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.584 [INFO][4999] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.587 [INFO][4999] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.587 [INFO][4999] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" host="localhost" Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.590 [INFO][4999] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659 Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.601 [INFO][4999] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" host="localhost" Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.616 [INFO][4999] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" host="localhost" Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.617 [INFO][4999] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" host="localhost" Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.617 [INFO][4999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:26:12.643484 containerd[1509]: 2025-01-29 16:26:12.617 [INFO][4999] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" HandleID="k8s-pod-network.cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" Workload="localhost-k8s-csi--node--driver--mjx6x-eth0" Jan 29 16:26:12.644368 containerd[1509]: 2025-01-29 16:26:12.620 [INFO][4960] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" Namespace="calico-system" Pod="csi-node-driver-mjx6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--mjx6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mjx6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9cc09215-26d9-4b38-816c-abf4c3c659ad", ResourceVersion:"593", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 25, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mjx6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid1ac614fb32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:26:12.644368 containerd[1509]: 2025-01-29 16:26:12.621 [INFO][4960] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" Namespace="calico-system" Pod="csi-node-driver-mjx6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--mjx6x-eth0" Jan 29 16:26:12.644368 containerd[1509]: 2025-01-29 16:26:12.621 [INFO][4960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1ac614fb32 ContainerID="cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" Namespace="calico-system" Pod="csi-node-driver-mjx6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--mjx6x-eth0" Jan 29 16:26:12.644368 containerd[1509]: 2025-01-29 16:26:12.624 [INFO][4960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" Namespace="calico-system" Pod="csi-node-driver-mjx6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--mjx6x-eth0" Jan 29 16:26:12.644368 containerd[1509]: 2025-01-29 16:26:12.624 [INFO][4960] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" Namespace="calico-system" Pod="csi-node-driver-mjx6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--mjx6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mjx6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9cc09215-26d9-4b38-816c-abf4c3c659ad", ResourceVersion:"593", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 25, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659", Pod:"csi-node-driver-mjx6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid1ac614fb32", MAC:"fa:65:33:7f:88:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:26:12.644368 containerd[1509]: 2025-01-29 16:26:12.637 [INFO][4960] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659" Namespace="calico-system" 
Pod="csi-node-driver-mjx6x" WorkloadEndpoint="localhost-k8s-csi--node--driver--mjx6x-eth0" Jan 29 16:26:12.703346 containerd[1509]: time="2025-01-29T16:26:12.700936090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:12.703346 containerd[1509]: time="2025-01-29T16:26:12.702314745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:12.703346 containerd[1509]: time="2025-01-29T16:26:12.702334994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:12.703346 containerd[1509]: time="2025-01-29T16:26:12.702486744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:12.703649 kubelet[2623]: I0129 16:26:12.703151 2623 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:26:12.703649 kubelet[2623]: E0129 16:26:12.703644 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:12.772851 systemd-networkd[1445]: caliece14580c49: Link UP Jan 29 16:26:12.773125 systemd-networkd[1445]: caliece14580c49: Gained carrier Jan 29 16:26:12.778226 systemd[1]: Started cri-containerd-cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659.scope - libcontainer container cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659. 
Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.539 [INFO][4988] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.562 [INFO][4988] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0 calico-apiserver-5947747589- calico-apiserver d3ce4654-53de-4d6a-8744-f657f07eba4f 828 0 2025-01-29 16:25:49 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5947747589 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5947747589-tzh8v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliece14580c49 [] []}} ContainerID="6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-tzh8v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--tzh8v-" Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.562 [INFO][4988] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-tzh8v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0" Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.624 [INFO][5021] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" HandleID="k8s-pod-network.6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" Workload="localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0" Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.661 [INFO][5021] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" HandleID="k8s-pod-network.6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" Workload="localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c6630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5947747589-tzh8v", "timestamp":"2025-01-29 16:26:12.624234298 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.661 [INFO][5021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.662 [INFO][5021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.662 [INFO][5021] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.666 [INFO][5021] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" host="localhost" Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.673 [INFO][5021] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.686 [INFO][5021] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.691 [INFO][5021] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.696 [INFO][5021] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.696 [INFO][5021] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" host="localhost" Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.700 [INFO][5021] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15 Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.716 [INFO][5021] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" host="localhost" Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.734 [INFO][5021] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" host="localhost" Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.736 [INFO][5021] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" host="localhost" Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.736 [INFO][5021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:26:12.799669 containerd[1509]: 2025-01-29 16:26:12.736 [INFO][5021] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" HandleID="k8s-pod-network.6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" Workload="localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0" Jan 29 16:26:12.801769 containerd[1509]: 2025-01-29 16:26:12.760 [INFO][4988] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-tzh8v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0", GenerateName:"calico-apiserver-5947747589-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3ce4654-53de-4d6a-8744-f657f07eba4f", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 25, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5947747589", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5947747589-tzh8v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliece14580c49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:26:12.801769 containerd[1509]: 2025-01-29 16:26:12.760 [INFO][4988] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-tzh8v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0" Jan 29 16:26:12.801769 containerd[1509]: 2025-01-29 16:26:12.760 [INFO][4988] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliece14580c49 ContainerID="6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-tzh8v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0" Jan 29 16:26:12.801769 containerd[1509]: 2025-01-29 16:26:12.769 [INFO][4988] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-tzh8v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0" Jan 29 16:26:12.801769 containerd[1509]: 2025-01-29 16:26:12.769 [INFO][4988] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-tzh8v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0", GenerateName:"calico-apiserver-5947747589-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3ce4654-53de-4d6a-8744-f657f07eba4f", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 25, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5947747589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15", Pod:"calico-apiserver-5947747589-tzh8v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliece14580c49", MAC:"2a:15:85:ad:38:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:26:12.801769 containerd[1509]: 2025-01-29 16:26:12.789 [INFO][4988] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15" Namespace="calico-apiserver" Pod="calico-apiserver-5947747589-tzh8v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5947747589--tzh8v-eth0" Jan 29 16:26:12.827116 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:26:12.880353 systemd[1]: run-netns-cni\x2d2983b690\x2d7066\x2dda97\x2db12f\x2ddbe0991c1ecd.mount: Deactivated successfully. Jan 29 16:26:12.882468 containerd[1509]: time="2025-01-29T16:26:12.882284330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjx6x,Uid:9cc09215-26d9-4b38-816c-abf4c3c659ad,Namespace:calico-system,Attempt:5,} returns sandbox id \"cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659\"" Jan 29 16:26:12.885364 containerd[1509]: time="2025-01-29T16:26:12.885255179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:12.889906 containerd[1509]: time="2025-01-29T16:26:12.885355801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:12.889906 containerd[1509]: time="2025-01-29T16:26:12.885379276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:12.889906 containerd[1509]: time="2025-01-29T16:26:12.885542548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:12.931745 systemd-networkd[1445]: califcaf318da5a: Link UP Jan 29 16:26:12.933306 systemd-networkd[1445]: califcaf318da5a: Gained carrier Jan 29 16:26:12.947642 systemd[1]: Started cri-containerd-6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15.scope - libcontainer container 6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15. Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.508 [INFO][4972] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.530 [INFO][4972] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0 coredns-6f6b679f8f- kube-system d39cd512-c288-44a0-b875-c359ef74dd3f 829 0 2025-01-29 16:25:42 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-6kc2r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califcaf318da5a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" Namespace="kube-system" Pod="coredns-6f6b679f8f-6kc2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6kc2r-" Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.531 [INFO][4972] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" Namespace="kube-system" Pod="coredns-6f6b679f8f-6kc2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0" Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.604 [INFO][5013] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" HandleID="k8s-pod-network.7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" Workload="localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0" Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.661 [INFO][5013] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" HandleID="k8s-pod-network.7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" Workload="localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00052b700), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-6kc2r", "timestamp":"2025-01-29 16:26:12.60470121 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.661 [INFO][5013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.736 [INFO][5013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.736 [INFO][5013] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.793 [INFO][5013] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" host="localhost" Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.815 [INFO][5013] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.828 [INFO][5013] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.832 [INFO][5013] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.838 [INFO][5013] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.838 [INFO][5013] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" host="localhost" Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.872 [INFO][5013] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.902 [INFO][5013] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" host="localhost" Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.917 [INFO][5013] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" host="localhost" Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.917 [INFO][5013] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" host="localhost" Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.917 [INFO][5013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:26:12.961188 containerd[1509]: 2025-01-29 16:26:12.917 [INFO][5013] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" HandleID="k8s-pod-network.7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" Workload="localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0" Jan 29 16:26:12.962592 containerd[1509]: 2025-01-29 16:26:12.927 [INFO][4972] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" Namespace="kube-system" Pod="coredns-6f6b679f8f-6kc2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d39cd512-c288-44a0-b875-c359ef74dd3f", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 25, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-6kc2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califcaf318da5a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:26:12.962592 containerd[1509]: 2025-01-29 16:26:12.928 [INFO][4972] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" Namespace="kube-system" Pod="coredns-6f6b679f8f-6kc2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0" Jan 29 16:26:12.962592 containerd[1509]: 2025-01-29 16:26:12.928 [INFO][4972] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcaf318da5a ContainerID="7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" Namespace="kube-system" Pod="coredns-6f6b679f8f-6kc2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0" Jan 29 16:26:12.962592 containerd[1509]: 2025-01-29 16:26:12.932 [INFO][4972] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" Namespace="kube-system" Pod="coredns-6f6b679f8f-6kc2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0" Jan 29 
16:26:12.962592 containerd[1509]: 2025-01-29 16:26:12.934 [INFO][4972] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" Namespace="kube-system" Pod="coredns-6f6b679f8f-6kc2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d39cd512-c288-44a0-b875-c359ef74dd3f", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 25, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d", Pod:"coredns-6f6b679f8f-6kc2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califcaf318da5a", MAC:"06:16:6f:2e:17:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:26:12.962592 containerd[1509]: 2025-01-29 16:26:12.955 [INFO][4972] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d" Namespace="kube-system" Pod="coredns-6f6b679f8f-6kc2r" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6kc2r-eth0" Jan 29 16:26:13.002822 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:26:13.085496 containerd[1509]: time="2025-01-29T16:26:13.085348652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5947747589-tzh8v,Uid:d3ce4654-53de-4d6a-8744-f657f07eba4f,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15\"" Jan 29 16:26:13.089316 containerd[1509]: time="2025-01-29T16:26:13.089157751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:13.089604 containerd[1509]: time="2025-01-29T16:26:13.089288670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:13.089604 containerd[1509]: time="2025-01-29T16:26:13.089316924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:13.091213 containerd[1509]: time="2025-01-29T16:26:13.091137993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:13.128702 systemd[1]: Started cri-containerd-7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d.scope - libcontainer container 7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d. Jan 29 16:26:13.156785 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:26:13.212443 containerd[1509]: time="2025-01-29T16:26:13.212342309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6kc2r,Uid:d39cd512-c288-44a0-b875-c359ef74dd3f,Namespace:kube-system,Attempt:5,} returns sandbox id \"7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d\"" Jan 29 16:26:13.213654 kubelet[2623]: E0129 16:26:13.213568 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:13.218004 containerd[1509]: time="2025-01-29T16:26:13.217688985Z" level=info msg="CreateContainer within sandbox \"7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:26:13.276565 kernel: bpftool[5343]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 16:26:13.282212 systemd-networkd[1445]: calic935704bf7d: Gained IPv6LL Jan 29 16:26:13.292372 containerd[1509]: time="2025-01-29T16:26:13.292289141Z" level=info msg="CreateContainer within sandbox \"7e4e8d2bbddc6ab1d33a086e68b1e10a9a3be3c87558364e199ea3a9162cac3d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4b6aed46a2f7423dd0a1f58f73cda934fe8f135e7dc53ac1515e7a7e6f9236a\"" Jan 29 16:26:13.293345 containerd[1509]: time="2025-01-29T16:26:13.293304181Z" level=info msg="StartContainer for \"d4b6aed46a2f7423dd0a1f58f73cda934fe8f135e7dc53ac1515e7a7e6f9236a\"" Jan 29 16:26:13.351884 systemd[1]: Started 
cri-containerd-d4b6aed46a2f7423dd0a1f58f73cda934fe8f135e7dc53ac1515e7a7e6f9236a.scope - libcontainer container d4b6aed46a2f7423dd0a1f58f73cda934fe8f135e7dc53ac1515e7a7e6f9236a. Jan 29 16:26:13.407656 containerd[1509]: time="2025-01-29T16:26:13.407322186Z" level=info msg="StartContainer for \"d4b6aed46a2f7423dd0a1f58f73cda934fe8f135e7dc53ac1515e7a7e6f9236a\" returns successfully" Jan 29 16:26:13.440697 kubelet[2623]: E0129 16:26:13.440642 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:13.448922 kubelet[2623]: E0129 16:26:13.448879 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:13.467255 kubelet[2623]: I0129 16:26:13.466475 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6kc2r" podStartSLOduration=31.466457036 podStartE2EDuration="31.466457036s" podCreationTimestamp="2025-01-29 16:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:26:13.465822464 +0000 UTC m=+36.075319185" watchObservedRunningTime="2025-01-29 16:26:13.466457036 +0000 UTC m=+36.075953757" Jan 29 16:26:13.538258 systemd-networkd[1445]: calie87059a3460: Gained IPv6LL Jan 29 16:26:13.667860 systemd-networkd[1445]: vxlan.calico: Link UP Jan 29 16:26:13.667873 systemd-networkd[1445]: vxlan.calico: Gained carrier Jan 29 16:26:14.049517 systemd-networkd[1445]: calic1367680068: Gained IPv6LL Jan 29 16:26:14.185444 kubelet[2623]: I0129 16:26:14.185321 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-llv2c" podStartSLOduration=32.185303624 podStartE2EDuration="32.185303624s" podCreationTimestamp="2025-01-29 
16:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:26:13.494044314 +0000 UTC m=+36.103541024" watchObservedRunningTime="2025-01-29 16:26:14.185303624 +0000 UTC m=+36.794800334" Jan 29 16:26:14.241583 systemd-networkd[1445]: calid1ac614fb32: Gained IPv6LL Jan 29 16:26:14.457945 kubelet[2623]: E0129 16:26:14.457805 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:14.458791 kubelet[2623]: E0129 16:26:14.458764 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:14.561643 systemd-networkd[1445]: caliece14580c49: Gained IPv6LL Jan 29 16:26:14.817536 systemd-networkd[1445]: vxlan.calico: Gained IPv6LL Jan 29 16:26:14.946516 systemd-networkd[1445]: califcaf318da5a: Gained IPv6LL Jan 29 16:26:15.460911 kubelet[2623]: E0129 16:26:15.460877 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:15.714749 containerd[1509]: time="2025-01-29T16:26:15.714640040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:15.715566 containerd[1509]: time="2025-01-29T16:26:15.715532624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 16:26:15.716690 containerd[1509]: time="2025-01-29T16:26:15.716648193Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 
16:26:15.718940 containerd[1509]: time="2025-01-29T16:26:15.718860407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:15.719465 containerd[1509]: time="2025-01-29T16:26:15.719430214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.434894347s" Jan 29 16:26:15.719513 containerd[1509]: time="2025-01-29T16:26:15.719464880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 16:26:15.720347 containerd[1509]: time="2025-01-29T16:26:15.720322838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 16:26:15.721375 containerd[1509]: time="2025-01-29T16:26:15.721261550Z" level=info msg="CreateContainer within sandbox \"9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 16:26:15.735898 containerd[1509]: time="2025-01-29T16:26:15.735854205Z" level=info msg="CreateContainer within sandbox \"9d5a78aff3bbe4544fc22dc3b0c5ec02eb0931425d79c7219aacb6ffba866594\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"50a04a9a3434affe2498b54c8bad394899fa80809a843df72b369aaa59f1a8ac\"" Jan 29 16:26:15.736378 containerd[1509]: time="2025-01-29T16:26:15.736348890Z" level=info msg="StartContainer for \"50a04a9a3434affe2498b54c8bad394899fa80809a843df72b369aaa59f1a8ac\"" Jan 29 16:26:15.776004 systemd[1]: Started 
cri-containerd-50a04a9a3434affe2498b54c8bad394899fa80809a843df72b369aaa59f1a8ac.scope - libcontainer container 50a04a9a3434affe2498b54c8bad394899fa80809a843df72b369aaa59f1a8ac. Jan 29 16:26:16.056706 containerd[1509]: time="2025-01-29T16:26:16.056666752Z" level=info msg="StartContainer for \"50a04a9a3434affe2498b54c8bad394899fa80809a843df72b369aaa59f1a8ac\" returns successfully" Jan 29 16:26:16.203176 systemd[1]: Started sshd@10-10.0.0.142:22-10.0.0.1:39802.service - OpenSSH per-connection server daemon (10.0.0.1:39802). Jan 29 16:26:16.251700 sshd[5541]: Accepted publickey for core from 10.0.0.1 port 39802 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:16.253441 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:16.258607 systemd-logind[1494]: New session 11 of user core. Jan 29 16:26:16.266625 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 16:26:16.412607 sshd[5544]: Connection closed by 10.0.0.1 port 39802 Jan 29 16:26:16.414223 sshd-session[5541]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:16.418440 systemd[1]: sshd@10-10.0.0.142:22-10.0.0.1:39802.service: Deactivated successfully. Jan 29 16:26:16.420764 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 16:26:16.421560 systemd-logind[1494]: Session 11 logged out. Waiting for processes to exit. Jan 29 16:26:16.422525 systemd-logind[1494]: Removed session 11. 
Jan 29 16:26:16.475634 kubelet[2623]: I0129 16:26:16.475300 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5947747589-n86vd" podStartSLOduration=24.038960715 podStartE2EDuration="27.475281917s" podCreationTimestamp="2025-01-29 16:25:49 +0000 UTC" firstStartedPulling="2025-01-29 16:26:12.283895062 +0000 UTC m=+34.893391772" lastFinishedPulling="2025-01-29 16:26:15.720216264 +0000 UTC m=+38.329712974" observedRunningTime="2025-01-29 16:26:16.474922772 +0000 UTC m=+39.084419482" watchObservedRunningTime="2025-01-29 16:26:16.475281917 +0000 UTC m=+39.084778627" Jan 29 16:26:17.467150 kubelet[2623]: I0129 16:26:17.467113 2623 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:26:18.987953 containerd[1509]: time="2025-01-29T16:26:18.987894744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:18.988987 containerd[1509]: time="2025-01-29T16:26:18.988788488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 16:26:18.990003 containerd[1509]: time="2025-01-29T16:26:18.989971545Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:18.992157 containerd[1509]: time="2025-01-29T16:26:18.992129961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:18.992858 containerd[1509]: time="2025-01-29T16:26:18.992814215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", 
repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.272463644s" Jan 29 16:26:18.992858 containerd[1509]: time="2025-01-29T16:26:18.992852528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 16:26:18.993777 containerd[1509]: time="2025-01-29T16:26:18.993752695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 16:26:19.005246 containerd[1509]: time="2025-01-29T16:26:19.004577822Z" level=info msg="CreateContainer within sandbox \"ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 16:26:19.025732 containerd[1509]: time="2025-01-29T16:26:19.025681869Z" level=info msg="CreateContainer within sandbox \"ad560129c87123d7ac6322b380957599d2fff859ab37bff344e1d810aff272c9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"22b6ec88b85b50e7edf6b15f6fd9f4d161203371008322ae2a01b786b7548ece\"" Jan 29 16:26:19.026173 containerd[1509]: time="2025-01-29T16:26:19.026146735Z" level=info msg="StartContainer for \"22b6ec88b85b50e7edf6b15f6fd9f4d161203371008322ae2a01b786b7548ece\"" Jan 29 16:26:19.064696 systemd[1]: Started cri-containerd-22b6ec88b85b50e7edf6b15f6fd9f4d161203371008322ae2a01b786b7548ece.scope - libcontainer container 22b6ec88b85b50e7edf6b15f6fd9f4d161203371008322ae2a01b786b7548ece. 
Jan 29 16:26:19.125812 containerd[1509]: time="2025-01-29T16:26:19.125700686Z" level=info msg="StartContainer for \"22b6ec88b85b50e7edf6b15f6fd9f4d161203371008322ae2a01b786b7548ece\" returns successfully" Jan 29 16:26:19.534486 kubelet[2623]: I0129 16:26:19.534247 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66b4c55cd5-pmg6b" podStartSLOduration=23.924007412999998 podStartE2EDuration="30.534229527s" podCreationTimestamp="2025-01-29 16:25:49 +0000 UTC" firstStartedPulling="2025-01-29 16:26:12.383383089 +0000 UTC m=+34.992879799" lastFinishedPulling="2025-01-29 16:26:18.993605203 +0000 UTC m=+41.603101913" observedRunningTime="2025-01-29 16:26:19.49183062 +0000 UTC m=+42.101327350" watchObservedRunningTime="2025-01-29 16:26:19.534229527 +0000 UTC m=+42.143726237" Jan 29 16:26:20.563743 containerd[1509]: time="2025-01-29T16:26:20.563686446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:20.564384 containerd[1509]: time="2025-01-29T16:26:20.564337837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 16:26:20.565475 containerd[1509]: time="2025-01-29T16:26:20.565441560Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:20.567620 containerd[1509]: time="2025-01-29T16:26:20.567562331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:20.568103 containerd[1509]: time="2025-01-29T16:26:20.568075610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.574206935s" Jan 29 16:26:20.568135 containerd[1509]: time="2025-01-29T16:26:20.568105026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 16:26:20.569074 containerd[1509]: time="2025-01-29T16:26:20.569042612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 16:26:20.569788 containerd[1509]: time="2025-01-29T16:26:20.569765971Z" level=info msg="CreateContainer within sandbox \"cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 16:26:20.593835 containerd[1509]: time="2025-01-29T16:26:20.593795219Z" level=info msg="CreateContainer within sandbox \"cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4192e3c2949fe6d33697d58868bb7e497356854c3c448100aa713443c5136316\"" Jan 29 16:26:20.594866 containerd[1509]: time="2025-01-29T16:26:20.594309599Z" level=info msg="StartContainer for \"4192e3c2949fe6d33697d58868bb7e497356854c3c448100aa713443c5136316\"" Jan 29 16:26:20.626595 systemd[1]: Started cri-containerd-4192e3c2949fe6d33697d58868bb7e497356854c3c448100aa713443c5136316.scope - libcontainer container 4192e3c2949fe6d33697d58868bb7e497356854c3c448100aa713443c5136316. 
Jan 29 16:26:20.658780 containerd[1509]: time="2025-01-29T16:26:20.658733911Z" level=info msg="StartContainer for \"4192e3c2949fe6d33697d58868bb7e497356854c3c448100aa713443c5136316\" returns successfully" Jan 29 16:26:21.111619 containerd[1509]: time="2025-01-29T16:26:21.111551707Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:21.112353 containerd[1509]: time="2025-01-29T16:26:21.112293650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 16:26:21.114189 containerd[1509]: time="2025-01-29T16:26:21.114158604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 545.085402ms" Jan 29 16:26:21.114189 containerd[1509]: time="2025-01-29T16:26:21.114187317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 16:26:21.115242 containerd[1509]: time="2025-01-29T16:26:21.115061132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 16:26:21.116132 containerd[1509]: time="2025-01-29T16:26:21.116100492Z" level=info msg="CreateContainer within sandbox \"6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 16:26:21.129797 containerd[1509]: time="2025-01-29T16:26:21.129763009Z" level=info msg="CreateContainer within sandbox \"6128276eb32c915d23683b7411e168b79a5d4c1febb38f7a171f1888fed46b15\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2ba2a327dc70d73e16ab6d314721c5be59b06d353277e15715953460afc6cae0\"" Jan 29 16:26:21.130295 containerd[1509]: time="2025-01-29T16:26:21.130240208Z" level=info msg="StartContainer for \"2ba2a327dc70d73e16ab6d314721c5be59b06d353277e15715953460afc6cae0\"" Jan 29 16:26:21.168588 systemd[1]: Started cri-containerd-2ba2a327dc70d73e16ab6d314721c5be59b06d353277e15715953460afc6cae0.scope - libcontainer container 2ba2a327dc70d73e16ab6d314721c5be59b06d353277e15715953460afc6cae0. Jan 29 16:26:21.223449 containerd[1509]: time="2025-01-29T16:26:21.223409865Z" level=info msg="StartContainer for \"2ba2a327dc70d73e16ab6d314721c5be59b06d353277e15715953460afc6cae0\" returns successfully" Jan 29 16:26:21.425568 systemd[1]: Started sshd@11-10.0.0.142:22-10.0.0.1:44972.service - OpenSSH per-connection server daemon (10.0.0.1:44972). Jan 29 16:26:21.476674 sshd[5711]: Accepted publickey for core from 10.0.0.1 port 44972 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:21.478164 sshd-session[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:21.482921 systemd-logind[1494]: New session 12 of user core. Jan 29 16:26:21.487778 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 29 16:26:21.503004 kubelet[2623]: I0129 16:26:21.502946 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5947747589-tzh8v" podStartSLOduration=24.476185113 podStartE2EDuration="32.502918011s" podCreationTimestamp="2025-01-29 16:25:49 +0000 UTC" firstStartedPulling="2025-01-29 16:26:13.088214908 +0000 UTC m=+35.697711619" lastFinishedPulling="2025-01-29 16:26:21.114947807 +0000 UTC m=+43.724444517" observedRunningTime="2025-01-29 16:26:21.50177317 +0000 UTC m=+44.111269880" watchObservedRunningTime="2025-01-29 16:26:21.502918011 +0000 UTC m=+44.112414721" Jan 29 16:26:21.646560 sshd[5713]: Connection closed by 10.0.0.1 port 44972 Jan 29 16:26:21.648329 sshd-session[5711]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:21.652982 systemd[1]: sshd@11-10.0.0.142:22-10.0.0.1:44972.service: Deactivated successfully. Jan 29 16:26:21.657078 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 16:26:21.658724 systemd-logind[1494]: Session 12 logged out. Waiting for processes to exit. Jan 29 16:26:21.660191 systemd-logind[1494]: Removed session 12. 
Jan 29 16:26:23.375918 containerd[1509]: time="2025-01-29T16:26:23.375869919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:23.421138 containerd[1509]: time="2025-01-29T16:26:23.421074377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 29 16:26:23.491502 containerd[1509]: time="2025-01-29T16:26:23.491433790Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:23.526459 containerd[1509]: time="2025-01-29T16:26:23.526414332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:23.526984 containerd[1509]: time="2025-01-29T16:26:23.526951054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.411855968s"
Jan 29 16:26:23.526984 containerd[1509]: time="2025-01-29T16:26:23.526977495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 29 16:26:23.528965 containerd[1509]: time="2025-01-29T16:26:23.528924181Z" level=info msg="CreateContainer within sandbox \"cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 29 16:26:23.584056 containerd[1509]: time="2025-01-29T16:26:23.584002970Z" level=info msg="CreateContainer within sandbox \"cafc652cfd1d968bc8e89e740eafe7399b7cf7b80c37452530f9682c6029a659\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1c918ef02cf6766fe8ec20431c92fd4c77c7054637f7094247138e85d1636d29\""
Jan 29 16:26:23.585950 containerd[1509]: time="2025-01-29T16:26:23.584541174Z" level=info msg="StartContainer for \"1c918ef02cf6766fe8ec20431c92fd4c77c7054637f7094247138e85d1636d29\""
Jan 29 16:26:23.622700 systemd[1]: Started cri-containerd-1c918ef02cf6766fe8ec20431c92fd4c77c7054637f7094247138e85d1636d29.scope - libcontainer container 1c918ef02cf6766fe8ec20431c92fd4c77c7054637f7094247138e85d1636d29.
Jan 29 16:26:23.663199 containerd[1509]: time="2025-01-29T16:26:23.663071635Z" level=info msg="StartContainer for \"1c918ef02cf6766fe8ec20431c92fd4c77c7054637f7094247138e85d1636d29\" returns successfully"
Jan 29 16:26:24.515127 kubelet[2623]: I0129 16:26:24.515053 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mjx6x" podStartSLOduration=24.872294574 podStartE2EDuration="35.515035081s" podCreationTimestamp="2025-01-29 16:25:49 +0000 UTC" firstStartedPulling="2025-01-29 16:26:12.884849163 +0000 UTC m=+35.494345873" lastFinishedPulling="2025-01-29 16:26:23.52758967 +0000 UTC m=+46.137086380" observedRunningTime="2025-01-29 16:26:24.511512296 +0000 UTC m=+47.121009016" watchObservedRunningTime="2025-01-29 16:26:24.515035081 +0000 UTC m=+47.124531791"
Jan 29 16:26:24.524906 kubelet[2623]: I0129 16:26:24.524877 2623 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 29 16:26:24.524982 kubelet[2623]: I0129 16:26:24.524917 2623 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 29 16:26:26.659581 systemd[1]: Started sshd@12-10.0.0.142:22-10.0.0.1:44978.service - OpenSSH per-connection server daemon (10.0.0.1:44978).
Jan 29 16:26:26.709487 sshd[5782]: Accepted publickey for core from 10.0.0.1 port 44978 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:26:26.711249 sshd-session[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:26:26.715803 systemd-logind[1494]: New session 13 of user core.
Jan 29 16:26:26.723536 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 16:26:26.845620 sshd[5784]: Connection closed by 10.0.0.1 port 44978
Jan 29 16:26:26.845976 sshd-session[5782]: pam_unix(sshd:session): session closed for user core
Jan 29 16:26:26.858193 systemd[1]: sshd@12-10.0.0.142:22-10.0.0.1:44978.service: Deactivated successfully.
Jan 29 16:26:26.860222 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 16:26:26.861806 systemd-logind[1494]: Session 13 logged out. Waiting for processes to exit.
Jan 29 16:26:26.871636 systemd[1]: Started sshd@13-10.0.0.142:22-10.0.0.1:44984.service - OpenSSH per-connection server daemon (10.0.0.1:44984).
Jan 29 16:26:26.872497 systemd-logind[1494]: Removed session 13.
Jan 29 16:26:26.909000 sshd[5799]: Accepted publickey for core from 10.0.0.1 port 44984 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:26:26.910354 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:26:26.914569 systemd-logind[1494]: New session 14 of user core.
Jan 29 16:26:26.926508 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 16:26:27.073408 sshd[5802]: Connection closed by 10.0.0.1 port 44984
Jan 29 16:26:27.076699 sshd-session[5799]: pam_unix(sshd:session): session closed for user core
Jan 29 16:26:27.088794 systemd[1]: sshd@13-10.0.0.142:22-10.0.0.1:44984.service: Deactivated successfully.
Jan 29 16:26:27.092172 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 16:26:27.094794 systemd-logind[1494]: Session 14 logged out. Waiting for processes to exit.
Jan 29 16:26:27.103079 systemd[1]: Started sshd@14-10.0.0.142:22-10.0.0.1:44996.service - OpenSSH per-connection server daemon (10.0.0.1:44996).
Jan 29 16:26:27.104782 systemd-logind[1494]: Removed session 14.
Jan 29 16:26:27.139895 sshd[5812]: Accepted publickey for core from 10.0.0.1 port 44996 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:26:27.141233 sshd-session[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:26:27.145312 systemd-logind[1494]: New session 15 of user core.
Jan 29 16:26:27.155545 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 16:26:27.261065 sshd[5815]: Connection closed by 10.0.0.1 port 44996
Jan 29 16:26:27.261301 sshd-session[5812]: pam_unix(sshd:session): session closed for user core
Jan 29 16:26:27.264854 systemd[1]: sshd@14-10.0.0.142:22-10.0.0.1:44996.service: Deactivated successfully.
Jan 29 16:26:27.266806 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 16:26:27.267449 systemd-logind[1494]: Session 15 logged out. Waiting for processes to exit.
Jan 29 16:26:27.268253 systemd-logind[1494]: Removed session 15.
Jan 29 16:26:32.277530 systemd[1]: Started sshd@15-10.0.0.142:22-10.0.0.1:36280.service - OpenSSH per-connection server daemon (10.0.0.1:36280).
Jan 29 16:26:32.317207 sshd[5829]: Accepted publickey for core from 10.0.0.1 port 36280 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:26:32.318711 sshd-session[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:26:32.322790 systemd-logind[1494]: New session 16 of user core.
Jan 29 16:26:32.336610 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 16:26:32.453717 sshd[5831]: Connection closed by 10.0.0.1 port 36280
Jan 29 16:26:32.454109 sshd-session[5829]: pam_unix(sshd:session): session closed for user core
Jan 29 16:26:32.459840 systemd[1]: sshd@15-10.0.0.142:22-10.0.0.1:36280.service: Deactivated successfully.
Jan 29 16:26:32.462697 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 16:26:32.464021 systemd-logind[1494]: Session 16 logged out. Waiting for processes to exit.
Jan 29 16:26:32.465693 systemd-logind[1494]: Removed session 16.
Jan 29 16:26:37.461022 containerd[1509]: time="2025-01-29T16:26:37.460974671Z" level=info msg="StopPodSandbox for \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\""
Jan 29 16:26:37.461991 containerd[1509]: time="2025-01-29T16:26:37.461083327Z" level=info msg="TearDown network for sandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\" successfully"
Jan 29 16:26:37.461991 containerd[1509]: time="2025-01-29T16:26:37.461094228Z" level=info msg="StopPodSandbox for \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\" returns successfully"
Jan 29 16:26:37.461991 containerd[1509]: time="2025-01-29T16:26:37.461411331Z" level=info msg="RemovePodSandbox for \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\""
Jan 29 16:26:37.472450 containerd[1509]: time="2025-01-29T16:26:37.472379967Z" level=info msg="Forcibly stopping sandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\""
Jan 29 16:26:37.472585 containerd[1509]: time="2025-01-29T16:26:37.472512439Z" level=info msg="TearDown network for sandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\" successfully"
Jan 29 16:26:37.476743 systemd[1]: Started sshd@16-10.0.0.142:22-10.0.0.1:41180.service - OpenSSH per-connection server daemon (10.0.0.1:41180).
Jan 29 16:26:37.484001 containerd[1509]: time="2025-01-29T16:26:37.483956419Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.484116 containerd[1509]: time="2025-01-29T16:26:37.484028295Z" level=info msg="RemovePodSandbox \"c542512dfd6f7c599a551dfb8227387264de407d38ce36a55c5f0eed259da8f4\" returns successfully"
Jan 29 16:26:37.484834 containerd[1509]: time="2025-01-29T16:26:37.484582258Z" level=info msg="StopPodSandbox for \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\""
Jan 29 16:26:37.484834 containerd[1509]: time="2025-01-29T16:26:37.484708779Z" level=info msg="TearDown network for sandbox \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\" successfully"
Jan 29 16:26:37.484834 containerd[1509]: time="2025-01-29T16:26:37.484720270Z" level=info msg="StopPodSandbox for \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\" returns successfully"
Jan 29 16:26:37.485134 containerd[1509]: time="2025-01-29T16:26:37.485113698Z" level=info msg="RemovePodSandbox for \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\""
Jan 29 16:26:37.485194 containerd[1509]: time="2025-01-29T16:26:37.485138625Z" level=info msg="Forcibly stopping sandbox \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\""
Jan 29 16:26:37.485242 containerd[1509]: time="2025-01-29T16:26:37.485221913Z" level=info msg="TearDown network for sandbox \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\" successfully"
Jan 29 16:26:37.489284 containerd[1509]: time="2025-01-29T16:26:37.489258030Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.489377 containerd[1509]: time="2025-01-29T16:26:37.489305932Z" level=info msg="RemovePodSandbox \"fbd85c2d6a90e508c796cd5067ec7bc464a7bb68d792e09de2a3dd7d16ce7cce\" returns successfully"
Jan 29 16:26:37.489821 containerd[1509]: time="2025-01-29T16:26:37.489675093Z" level=info msg="StopPodSandbox for \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\""
Jan 29 16:26:37.489821 containerd[1509]: time="2025-01-29T16:26:37.489765284Z" level=info msg="TearDown network for sandbox \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\" successfully"
Jan 29 16:26:37.489821 containerd[1509]: time="2025-01-29T16:26:37.489774992Z" level=info msg="StopPodSandbox for \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\" returns successfully"
Jan 29 16:26:37.490462 containerd[1509]: time="2025-01-29T16:26:37.490425288Z" level=info msg="RemovePodSandbox for \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\""
Jan 29 16:26:37.490462 containerd[1509]: time="2025-01-29T16:26:37.490444233Z" level=info msg="Forcibly stopping sandbox \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\""
Jan 29 16:26:37.490622 containerd[1509]: time="2025-01-29T16:26:37.490516781Z" level=info msg="TearDown network for sandbox \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\" successfully"
Jan 29 16:26:37.494372 containerd[1509]: time="2025-01-29T16:26:37.494344863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.494495 containerd[1509]: time="2025-01-29T16:26:37.494388646Z" level=info msg="RemovePodSandbox \"28c3568cc3fa6364c60fd3cd74421758e8a07dbe95620ed2b1124cb5538265c0\" returns successfully"
Jan 29 16:26:37.494730 containerd[1509]: time="2025-01-29T16:26:37.494714786Z" level=info msg="StopPodSandbox for \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\""
Jan 29 16:26:37.494812 containerd[1509]: time="2025-01-29T16:26:37.494797523Z" level=info msg="TearDown network for sandbox \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\" successfully"
Jan 29 16:26:37.494849 containerd[1509]: time="2025-01-29T16:26:37.494810649Z" level=info msg="StopPodSandbox for \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\" returns successfully"
Jan 29 16:26:37.495110 containerd[1509]: time="2025-01-29T16:26:37.495086462Z" level=info msg="RemovePodSandbox for \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\""
Jan 29 16:26:37.495110 containerd[1509]: time="2025-01-29T16:26:37.495105358Z" level=info msg="Forcibly stopping sandbox \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\""
Jan 29 16:26:37.495201 containerd[1509]: time="2025-01-29T16:26:37.495168788Z" level=info msg="TearDown network for sandbox \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\" successfully"
Jan 29 16:26:37.499565 containerd[1509]: time="2025-01-29T16:26:37.498742318Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.499565 containerd[1509]: time="2025-01-29T16:26:37.498781772Z" level=info msg="RemovePodSandbox \"c08b9f019c733fd917b54e0bf91b8f4112e3adcd9ad6553afba700b4e939a2dc\" returns successfully"
Jan 29 16:26:37.499565 containerd[1509]: time="2025-01-29T16:26:37.499174348Z" level=info msg="StopPodSandbox for \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\""
Jan 29 16:26:37.499565 containerd[1509]: time="2025-01-29T16:26:37.499316908Z" level=info msg="TearDown network for sandbox \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\" successfully"
Jan 29 16:26:37.499565 containerd[1509]: time="2025-01-29T16:26:37.499333420Z" level=info msg="StopPodSandbox for \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\" returns successfully"
Jan 29 16:26:37.499752 containerd[1509]: time="2025-01-29T16:26:37.499650293Z" level=info msg="RemovePodSandbox for \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\""
Jan 29 16:26:37.499752 containerd[1509]: time="2025-01-29T16:26:37.499669138Z" level=info msg="Forcibly stopping sandbox \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\""
Jan 29 16:26:37.499803 containerd[1509]: time="2025-01-29T16:26:37.499739031Z" level=info msg="TearDown network for sandbox \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\" successfully"
Jan 29 16:26:37.507974 containerd[1509]: time="2025-01-29T16:26:37.507927396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.508083 containerd[1509]: time="2025-01-29T16:26:37.508005214Z" level=info msg="RemovePodSandbox \"4d506bde26da0d64144cf78932ae6b5a8a98b498c435a525b4884bbce7d09110\" returns successfully"
Jan 29 16:26:37.508293 containerd[1509]: time="2025-01-29T16:26:37.508263254Z" level=info msg="StopPodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\""
Jan 29 16:26:37.508421 containerd[1509]: time="2025-01-29T16:26:37.508365318Z" level=info msg="TearDown network for sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" successfully"
Jan 29 16:26:37.508421 containerd[1509]: time="2025-01-29T16:26:37.508375347Z" level=info msg="StopPodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" returns successfully"
Jan 29 16:26:37.509517 containerd[1509]: time="2025-01-29T16:26:37.508659968Z" level=info msg="RemovePodSandbox for \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\""
Jan 29 16:26:37.509517 containerd[1509]: time="2025-01-29T16:26:37.508681619Z" level=info msg="Forcibly stopping sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\""
Jan 29 16:26:37.509517 containerd[1509]: time="2025-01-29T16:26:37.508751912Z" level=info msg="TearDown network for sandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" successfully"
Jan 29 16:26:37.515532 containerd[1509]: time="2025-01-29T16:26:37.515491836Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.515676 containerd[1509]: time="2025-01-29T16:26:37.515562951Z" level=info msg="RemovePodSandbox \"3571135cc446d25e04b98914e3184064da4831070cdbb18edff40ccd0c69df4c\" returns successfully"
Jan 29 16:26:37.515891 containerd[1509]: time="2025-01-29T16:26:37.515856348Z" level=info msg="StopPodSandbox for \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\""
Jan 29 16:26:37.515982 containerd[1509]: time="2025-01-29T16:26:37.515957941Z" level=info msg="TearDown network for sandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\" successfully"
Jan 29 16:26:37.515982 containerd[1509]: time="2025-01-29T16:26:37.515974102Z" level=info msg="StopPodSandbox for \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\" returns successfully"
Jan 29 16:26:37.516279 containerd[1509]: time="2025-01-29T16:26:37.516247622Z" level=info msg="RemovePodSandbox for \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\""
Jan 29 16:26:37.516369 containerd[1509]: time="2025-01-29T16:26:37.516281997Z" level=info msg="Forcibly stopping sandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\""
Jan 29 16:26:37.516410 containerd[1509]: time="2025-01-29T16:26:37.516380364Z" level=info msg="TearDown network for sandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\" successfully"
Jan 29 16:26:37.520285 containerd[1509]: time="2025-01-29T16:26:37.520252649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.520332 containerd[1509]: time="2025-01-29T16:26:37.520291894Z" level=info msg="RemovePodSandbox \"89c769ddda1afedc8f521d5148e65768052590ad45c773f382d9c4a28dcbd5e6\" returns successfully"
Jan 29 16:26:37.520583 containerd[1509]: time="2025-01-29T16:26:37.520543653Z" level=info msg="StopPodSandbox for \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\""
Jan 29 16:26:37.520658 containerd[1509]: time="2025-01-29T16:26:37.520641479Z" level=info msg="TearDown network for sandbox \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\" successfully"
Jan 29 16:26:37.520658 containerd[1509]: time="2025-01-29T16:26:37.520655715Z" level=info msg="StopPodSandbox for \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\" returns successfully"
Jan 29 16:26:37.520923 containerd[1509]: time="2025-01-29T16:26:37.520898156Z" level=info msg="RemovePodSandbox for \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\""
Jan 29 16:26:37.520923 containerd[1509]: time="2025-01-29T16:26:37.520919958Z" level=info msg="Forcibly stopping sandbox \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\""
Jan 29 16:26:37.521014 containerd[1509]: time="2025-01-29T16:26:37.520979270Z" level=info msg="TearDown network for sandbox \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\" successfully"
Jan 29 16:26:37.521778 sshd[5877]: Accepted publickey for core from 10.0.0.1 port 41180 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:26:37.524760 containerd[1509]: time="2025-01-29T16:26:37.524713764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.524834 containerd[1509]: time="2025-01-29T16:26:37.524764010Z" level=info msg="RemovePodSandbox \"d3cc7d9b8478d36f215c60eeb55fecb5880d97684b29106fefe8d70ae9a54509\" returns successfully"
Jan 29 16:26:37.524792 sshd-session[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:26:37.525540 containerd[1509]: time="2025-01-29T16:26:37.525512833Z" level=info msg="StopPodSandbox for \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\""
Jan 29 16:26:37.525944 containerd[1509]: time="2025-01-29T16:26:37.525891712Z" level=info msg="TearDown network for sandbox \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\" successfully"
Jan 29 16:26:37.525944 containerd[1509]: time="2025-01-29T16:26:37.525908263Z" level=info msg="StopPodSandbox for \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\" returns successfully"
Jan 29 16:26:37.526190 containerd[1509]: time="2025-01-29T16:26:37.526161284Z" level=info msg="RemovePodSandbox for \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\""
Jan 29 16:26:37.526190 containerd[1509]: time="2025-01-29T16:26:37.526185250Z" level=info msg="Forcibly stopping sandbox \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\""
Jan 29 16:26:37.526312 containerd[1509]: time="2025-01-29T16:26:37.526256505Z" level=info msg="TearDown network for sandbox \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\" successfully"
Jan 29 16:26:37.529804 containerd[1509]: time="2025-01-29T16:26:37.529773857Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.529885 containerd[1509]: time="2025-01-29T16:26:37.529812811Z" level=info msg="RemovePodSandbox \"690ccfa82e1a22a699885aafd6d4da5ade15a2805110567540665a4301050063\" returns successfully"
Jan 29 16:26:37.530220 containerd[1509]: time="2025-01-29T16:26:37.530076773Z" level=info msg="StopPodSandbox for \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\""
Jan 29 16:26:37.530220 containerd[1509]: time="2025-01-29T16:26:37.530156484Z" level=info msg="TearDown network for sandbox \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\" successfully"
Jan 29 16:26:37.530220 containerd[1509]: time="2025-01-29T16:26:37.530165281Z" level=info msg="StopPodSandbox for \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\" returns successfully"
Jan 29 16:26:37.530422 containerd[1509]: time="2025-01-29T16:26:37.530381722Z" level=info msg="RemovePodSandbox for \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\""
Jan 29 16:26:37.530565 containerd[1509]: time="2025-01-29T16:26:37.530548639Z" level=info msg="Forcibly stopping sandbox \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\""
Jan 29 16:26:37.530672 containerd[1509]: time="2025-01-29T16:26:37.530632939Z" level=info msg="TearDown network for sandbox \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\" successfully"
Jan 29 16:26:37.531326 systemd-logind[1494]: New session 17 of user core.
Jan 29 16:26:37.534377 containerd[1509]: time="2025-01-29T16:26:37.534352014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.534434 containerd[1509]: time="2025-01-29T16:26:37.534388824Z" level=info msg="RemovePodSandbox \"91b0cd6d1649a0f56c1acdf46e0cd6235813234b1f855160fdf81c466965df73\" returns successfully"
Jan 29 16:26:37.534722 containerd[1509]: time="2025-01-29T16:26:37.534692130Z" level=info msg="StopPodSandbox for \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\""
Jan 29 16:26:37.534812 containerd[1509]: time="2025-01-29T16:26:37.534773444Z" level=info msg="TearDown network for sandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\" successfully"
Jan 29 16:26:37.534812 containerd[1509]: time="2025-01-29T16:26:37.534783373Z" level=info msg="StopPodSandbox for \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\" returns successfully"
Jan 29 16:26:37.535029 containerd[1509]: time="2025-01-29T16:26:37.535008080Z" level=info msg="RemovePodSandbox for \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\""
Jan 29 16:26:37.535029 containerd[1509]: time="2025-01-29T16:26:37.535025283Z" level=info msg="Forcibly stopping sandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\""
Jan 29 16:26:37.535121 containerd[1509]: time="2025-01-29T16:26:37.535089435Z" level=info msg="TearDown network for sandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\" successfully"
Jan 29 16:26:37.538488 containerd[1509]: time="2025-01-29T16:26:37.538449498Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.538557 containerd[1509]: time="2025-01-29T16:26:37.538493341Z" level=info msg="RemovePodSandbox \"80613e6e8792e4b13fd13e80e0db5ec81ebc069eab43ff546ca84e01f3a6563c\" returns successfully"
Jan 29 16:26:37.538812 containerd[1509]: time="2025-01-29T16:26:37.538767272Z" level=info msg="StopPodSandbox for \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\""
Jan 29 16:26:37.538936 containerd[1509]: time="2025-01-29T16:26:37.538907218Z" level=info msg="TearDown network for sandbox \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\" successfully"
Jan 29 16:26:37.538967 containerd[1509]: time="2025-01-29T16:26:37.538931935Z" level=info msg="StopPodSandbox for \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\" returns successfully"
Jan 29 16:26:37.539183 containerd[1509]: time="2025-01-29T16:26:37.539155659Z" level=info msg="RemovePodSandbox for \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\""
Jan 29 16:26:37.539183 containerd[1509]: time="2025-01-29T16:26:37.539177290Z" level=info msg="Forcibly stopping sandbox \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\""
Jan 29 16:26:37.539267 containerd[1509]: time="2025-01-29T16:26:37.539242164Z" level=info msg="TearDown network for sandbox \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\" successfully"
Jan 29 16:26:37.540574 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 16:26:37.542730 containerd[1509]: time="2025-01-29T16:26:37.542702397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.542779 containerd[1509]: time="2025-01-29T16:26:37.542750268Z" level=info msg="RemovePodSandbox \"73cdd7ecbb3a29be4ac5702a93175698457c609eac06dd1aa5b97ffe75669d6b\" returns successfully"
Jan 29 16:26:37.543052 containerd[1509]: time="2025-01-29T16:26:37.543021032Z" level=info msg="StopPodSandbox for \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\""
Jan 29 16:26:37.543134 containerd[1509]: time="2025-01-29T16:26:37.543115241Z" level=info msg="TearDown network for sandbox \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\" successfully"
Jan 29 16:26:37.543134 containerd[1509]: time="2025-01-29T16:26:37.543130070Z" level=info msg="StopPodSandbox for \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\" returns successfully"
Jan 29 16:26:37.543441 containerd[1509]: time="2025-01-29T16:26:37.543412546Z" level=info msg="RemovePodSandbox for \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\""
Jan 29 16:26:37.543441 containerd[1509]: time="2025-01-29T16:26:37.543438034Z" level=info msg="Forcibly stopping sandbox \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\""
Jan 29 16:26:37.543589 containerd[1509]: time="2025-01-29T16:26:37.543516714Z" level=info msg="TearDown network for sandbox \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\" successfully"
Jan 29 16:26:37.546990 containerd[1509]: time="2025-01-29T16:26:37.546954615Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.547051 containerd[1509]: time="2025-01-29T16:26:37.547004740Z" level=info msg="RemovePodSandbox \"04ff4464dcae10e03c7c8c662563c51a9dba7e5760bb943cc6742dca34891b69\" returns successfully"
Jan 29 16:26:37.550702 containerd[1509]: time="2025-01-29T16:26:37.550661406Z" level=info msg="StopPodSandbox for \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\""
Jan 29 16:26:37.551005 containerd[1509]: time="2025-01-29T16:26:37.550946227Z" level=info msg="TearDown network for sandbox \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\" successfully"
Jan 29 16:26:37.551005 containerd[1509]: time="2025-01-29T16:26:37.550991864Z" level=info msg="StopPodSandbox for \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\" returns successfully"
Jan 29 16:26:37.551358 containerd[1509]: time="2025-01-29T16:26:37.551322091Z" level=info msg="RemovePodSandbox for \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\""
Jan 29 16:26:37.551358 containerd[1509]: time="2025-01-29T16:26:37.551353140Z" level=info msg="Forcibly stopping sandbox \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\""
Jan 29 16:26:37.551496 containerd[1509]: time="2025-01-29T16:26:37.551445716Z" level=info msg="TearDown network for sandbox \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\" successfully"
Jan 29 16:26:37.555302 containerd[1509]: time="2025-01-29T16:26:37.555261605Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.555356 containerd[1509]: time="2025-01-29T16:26:37.555337810Z" level=info msg="RemovePodSandbox \"119a8a78ec71f42f86da9c1adeb5e1e6145caff1ff14c8bee8b2c70a75d65991\" returns successfully"
Jan 29 16:26:37.555834 containerd[1509]: time="2025-01-29T16:26:37.555680632Z" level=info msg="StopPodSandbox for \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\""
Jan 29 16:26:37.555834 containerd[1509]: time="2025-01-29T16:26:37.555773799Z" level=info msg="TearDown network for sandbox \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\" successfully"
Jan 29 16:26:37.555834 containerd[1509]: time="2025-01-29T16:26:37.555783106Z" level=info msg="StopPodSandbox for \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\" returns successfully"
Jan 29 16:26:37.556018 containerd[1509]: time="2025-01-29T16:26:37.555991843Z" level=info msg="RemovePodSandbox for \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\""
Jan 29 16:26:37.556048 containerd[1509]: time="2025-01-29T16:26:37.556021720Z" level=info msg="Forcibly stopping sandbox \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\""
Jan 29 16:26:37.556148 containerd[1509]: time="2025-01-29T16:26:37.556105779Z" level=info msg="TearDown network for sandbox \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\" successfully"
Jan 29 16:26:37.559742 containerd[1509]: time="2025-01-29T16:26:37.559709425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:26:37.559803 containerd[1509]: time="2025-01-29T16:26:37.559750874Z" level=info msg="RemovePodSandbox \"ac8b20e3f3a0da06a6c26bb782cabb422c86bb15c5576354dedc2773ed82ac6a\" returns successfully" Jan 29 16:26:37.560072 containerd[1509]: time="2025-01-29T16:26:37.560024293Z" level=info msg="StopPodSandbox for \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\"" Jan 29 16:26:37.560119 containerd[1509]: time="2025-01-29T16:26:37.560102010Z" level=info msg="TearDown network for sandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\" successfully" Jan 29 16:26:37.560119 containerd[1509]: time="2025-01-29T16:26:37.560115747Z" level=info msg="StopPodSandbox for \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\" returns successfully" Jan 29 16:26:37.560417 containerd[1509]: time="2025-01-29T16:26:37.560347517Z" level=info msg="RemovePodSandbox for \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\"" Jan 29 16:26:37.560417 containerd[1509]: time="2025-01-29T16:26:37.560377865Z" level=info msg="Forcibly stopping sandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\"" Jan 29 16:26:37.560531 containerd[1509]: time="2025-01-29T16:26:37.560493104Z" level=info msg="TearDown network for sandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\" successfully" Jan 29 16:26:37.564313 containerd[1509]: time="2025-01-29T16:26:37.564274297Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.564362 containerd[1509]: time="2025-01-29T16:26:37.564316517Z" level=info msg="RemovePodSandbox \"6f803a82001ca02c945f2e6b7210a749a92d9906b92a80b69b0e163f795fa089\" returns successfully" Jan 29 16:26:37.564580 containerd[1509]: time="2025-01-29T16:26:37.564555681Z" level=info msg="StopPodSandbox for \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\"" Jan 29 16:26:37.564691 containerd[1509]: time="2025-01-29T16:26:37.564658015Z" level=info msg="TearDown network for sandbox \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\" successfully" Jan 29 16:26:37.564691 containerd[1509]: time="2025-01-29T16:26:37.564680929Z" level=info msg="StopPodSandbox for \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\" returns successfully" Jan 29 16:26:37.564997 containerd[1509]: time="2025-01-29T16:26:37.564932166Z" level=info msg="RemovePodSandbox for \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\"" Jan 29 16:26:37.564997 containerd[1509]: time="2025-01-29T16:26:37.564965430Z" level=info msg="Forcibly stopping sandbox \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\"" Jan 29 16:26:37.565075 containerd[1509]: time="2025-01-29T16:26:37.565043668Z" level=info msg="TearDown network for sandbox \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\" successfully" Jan 29 16:26:37.568829 containerd[1509]: time="2025-01-29T16:26:37.568797669Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.568870 containerd[1509]: time="2025-01-29T16:26:37.568842374Z" level=info msg="RemovePodSandbox \"96b0827dbf8e5606d30e50b23f068b90200a72f566131d45051077e1cc7a0833\" returns successfully" Jan 29 16:26:37.569158 containerd[1509]: time="2025-01-29T16:26:37.569137074Z" level=info msg="StopPodSandbox for \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\"" Jan 29 16:26:37.569272 containerd[1509]: time="2025-01-29T16:26:37.569252493Z" level=info msg="TearDown network for sandbox \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\" successfully" Jan 29 16:26:37.569302 containerd[1509]: time="2025-01-29T16:26:37.569268124Z" level=info msg="StopPodSandbox for \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\" returns successfully" Jan 29 16:26:37.569537 containerd[1509]: time="2025-01-29T16:26:37.569504682Z" level=info msg="RemovePodSandbox for \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\"" Jan 29 16:26:37.569537 containerd[1509]: time="2025-01-29T16:26:37.569532656Z" level=info msg="Forcibly stopping sandbox \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\"" Jan 29 16:26:37.569627 containerd[1509]: time="2025-01-29T16:26:37.569599112Z" level=info msg="TearDown network for sandbox \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\" successfully" Jan 29 16:26:37.573179 containerd[1509]: time="2025-01-29T16:26:37.573150368Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.573244 containerd[1509]: time="2025-01-29T16:26:37.573183742Z" level=info msg="RemovePodSandbox \"ec0460cd410a661cdde582c63042a0e1996d8f51f79aa309b6c7ddd534e220a2\" returns successfully" Jan 29 16:26:37.573481 containerd[1509]: time="2025-01-29T16:26:37.573450057Z" level=info msg="StopPodSandbox for \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\"" Jan 29 16:26:37.573565 containerd[1509]: time="2025-01-29T16:26:37.573540309Z" level=info msg="TearDown network for sandbox \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\" successfully" Jan 29 16:26:37.573565 containerd[1509]: time="2025-01-29T16:26:37.573554276Z" level=info msg="StopPodSandbox for \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\" returns successfully" Jan 29 16:26:37.573806 containerd[1509]: time="2025-01-29T16:26:37.573783491Z" level=info msg="RemovePodSandbox for \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\"" Jan 29 16:26:37.573806 containerd[1509]: time="2025-01-29T16:26:37.573804020Z" level=info msg="Forcibly stopping sandbox \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\"" Jan 29 16:26:37.573903 containerd[1509]: time="2025-01-29T16:26:37.573863442Z" level=info msg="TearDown network for sandbox \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\" successfully" Jan 29 16:26:37.577304 containerd[1509]: time="2025-01-29T16:26:37.577264934Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.577304 containerd[1509]: time="2025-01-29T16:26:37.577296354Z" level=info msg="RemovePodSandbox \"6393677f7ea7f96abd5c92a594fcb9a79c707b58e57745409be3d465d27e8723\" returns successfully" Jan 29 16:26:37.577581 containerd[1509]: time="2025-01-29T16:26:37.577548373Z" level=info msg="StopPodSandbox for \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\"" Jan 29 16:26:37.577649 containerd[1509]: time="2025-01-29T16:26:37.577624548Z" level=info msg="TearDown network for sandbox \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\" successfully" Jan 29 16:26:37.577649 containerd[1509]: time="2025-01-29T16:26:37.577637762Z" level=info msg="StopPodSandbox for \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\" returns successfully" Jan 29 16:26:37.577897 containerd[1509]: time="2025-01-29T16:26:37.577856028Z" level=info msg="RemovePodSandbox for \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\"" Jan 29 16:26:37.577897 containerd[1509]: time="2025-01-29T16:26:37.577878640Z" level=info msg="Forcibly stopping sandbox \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\"" Jan 29 16:26:37.577979 containerd[1509]: time="2025-01-29T16:26:37.577946980Z" level=info msg="TearDown network for sandbox \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\" successfully" Jan 29 16:26:37.581315 containerd[1509]: time="2025-01-29T16:26:37.581284450Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.581366 containerd[1509]: time="2025-01-29T16:26:37.581320138Z" level=info msg="RemovePodSandbox \"0251b16f9c30c53d0933dc9a2303ac216a5a3eb5692cd45222413f5f16166980\" returns successfully" Jan 29 16:26:37.581593 containerd[1509]: time="2025-01-29T16:26:37.581568470Z" level=info msg="StopPodSandbox for \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\"" Jan 29 16:26:37.581683 containerd[1509]: time="2025-01-29T16:26:37.581658310Z" level=info msg="TearDown network for sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" successfully" Jan 29 16:26:37.581683 containerd[1509]: time="2025-01-29T16:26:37.581673950Z" level=info msg="StopPodSandbox for \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" returns successfully" Jan 29 16:26:37.581908 containerd[1509]: time="2025-01-29T16:26:37.581887346Z" level=info msg="RemovePodSandbox for \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\"" Jan 29 16:26:37.581908 containerd[1509]: time="2025-01-29T16:26:37.581904808Z" level=info msg="Forcibly stopping sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\"" Jan 29 16:26:37.582006 containerd[1509]: time="2025-01-29T16:26:37.581969351Z" level=info msg="TearDown network for sandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" successfully" Jan 29 16:26:37.585382 containerd[1509]: time="2025-01-29T16:26:37.585352057Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.585452 containerd[1509]: time="2025-01-29T16:26:37.585404027Z" level=info msg="RemovePodSandbox \"de8a7a0ab050984c8735cfa11d5229933bae466d7f1d96ffdea1154dd75d1a2e\" returns successfully" Jan 29 16:26:37.585675 containerd[1509]: time="2025-01-29T16:26:37.585650124Z" level=info msg="StopPodSandbox for \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\"" Jan 29 16:26:37.585748 containerd[1509]: time="2025-01-29T16:26:37.585731719Z" level=info msg="TearDown network for sandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\" successfully" Jan 29 16:26:37.585748 containerd[1509]: time="2025-01-29T16:26:37.585745755Z" level=info msg="StopPodSandbox for \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\" returns successfully" Jan 29 16:26:37.585969 containerd[1509]: time="2025-01-29T16:26:37.585946166Z" level=info msg="RemovePodSandbox for \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\"" Jan 29 16:26:37.585969 containerd[1509]: time="2025-01-29T16:26:37.585963709Z" level=info msg="Forcibly stopping sandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\"" Jan 29 16:26:37.586046 containerd[1509]: time="2025-01-29T16:26:37.586020948Z" level=info msg="TearDown network for sandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\" successfully" Jan 29 16:26:37.589342 containerd[1509]: time="2025-01-29T16:26:37.589318152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.589404 containerd[1509]: time="2025-01-29T16:26:37.589354922Z" level=info msg="RemovePodSandbox \"bc304ec01bb42f3b7c928073ba7a61272630c33a925ff226544135c352091e09\" returns successfully" Jan 29 16:26:37.589623 containerd[1509]: time="2025-01-29T16:26:37.589586531Z" level=info msg="StopPodSandbox for \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\"" Jan 29 16:26:37.589674 containerd[1509]: time="2025-01-29T16:26:37.589657276Z" level=info msg="TearDown network for sandbox \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\" successfully" Jan 29 16:26:37.589674 containerd[1509]: time="2025-01-29T16:26:37.589666343Z" level=info msg="StopPodSandbox for \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\" returns successfully" Jan 29 16:26:37.589896 containerd[1509]: time="2025-01-29T16:26:37.589869789Z" level=info msg="RemovePodSandbox for \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\"" Jan 29 16:26:37.589896 containerd[1509]: time="2025-01-29T16:26:37.589890369Z" level=info msg="Forcibly stopping sandbox \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\"" Jan 29 16:26:37.590098 containerd[1509]: time="2025-01-29T16:26:37.590065842Z" level=info msg="TearDown network for sandbox \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\" successfully" Jan 29 16:26:37.593486 containerd[1509]: time="2025-01-29T16:26:37.593452626Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.593541 containerd[1509]: time="2025-01-29T16:26:37.593492512Z" level=info msg="RemovePodSandbox \"5cd8e39f1e30813a2165a308458d9ab2e177e2553ff0c1361db211f972a133c4\" returns successfully" Jan 29 16:26:37.593742 containerd[1509]: time="2025-01-29T16:26:37.593725824Z" level=info msg="StopPodSandbox for \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\"" Jan 29 16:26:37.593810 containerd[1509]: time="2025-01-29T16:26:37.593799435Z" level=info msg="TearDown network for sandbox \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\" successfully" Jan 29 16:26:37.593844 containerd[1509]: time="2025-01-29T16:26:37.593810555Z" level=info msg="StopPodSandbox for \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\" returns successfully" Jan 29 16:26:37.594014 containerd[1509]: time="2025-01-29T16:26:37.593997651Z" level=info msg="RemovePodSandbox for \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\"" Jan 29 16:26:37.594050 containerd[1509]: time="2025-01-29T16:26:37.594016066Z" level=info msg="Forcibly stopping sandbox \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\"" Jan 29 16:26:37.594094 containerd[1509]: time="2025-01-29T16:26:37.594071331Z" level=info msg="TearDown network for sandbox \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\" successfully" Jan 29 16:26:37.597362 containerd[1509]: time="2025-01-29T16:26:37.597335341Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.597427 containerd[1509]: time="2025-01-29T16:26:37.597366471Z" level=info msg="RemovePodSandbox \"619589d1866a7c7f8ef9ba1a71b635087f45d65b4008e796ea8c7989f4604b57\" returns successfully" Jan 29 16:26:37.597711 containerd[1509]: time="2025-01-29T16:26:37.597682702Z" level=info msg="StopPodSandbox for \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\"" Jan 29 16:26:37.597823 containerd[1509]: time="2025-01-29T16:26:37.597795686Z" level=info msg="TearDown network for sandbox \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\" successfully" Jan 29 16:26:37.597823 containerd[1509]: time="2025-01-29T16:26:37.597810795Z" level=info msg="StopPodSandbox for \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\" returns successfully" Jan 29 16:26:37.598055 containerd[1509]: time="2025-01-29T16:26:37.598031945Z" level=info msg="RemovePodSandbox for \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\"" Jan 29 16:26:37.598095 containerd[1509]: time="2025-01-29T16:26:37.598060279Z" level=info msg="Forcibly stopping sandbox \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\"" Jan 29 16:26:37.598158 containerd[1509]: time="2025-01-29T16:26:37.598131515Z" level=info msg="TearDown network for sandbox \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\" successfully" Jan 29 16:26:37.601553 containerd[1509]: time="2025-01-29T16:26:37.601516294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.601592 containerd[1509]: time="2025-01-29T16:26:37.601574635Z" level=info msg="RemovePodSandbox \"4bd5996cfc8adadcc4d86dfbd5fbbfe020cd1fac9a5eb9a279d949adb1dd4a5f\" returns successfully" Jan 29 16:26:37.601840 containerd[1509]: time="2025-01-29T16:26:37.601823288Z" level=info msg="StopPodSandbox for \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\"" Jan 29 16:26:37.601934 containerd[1509]: time="2025-01-29T16:26:37.601900894Z" level=info msg="TearDown network for sandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\" successfully" Jan 29 16:26:37.601969 containerd[1509]: time="2025-01-29T16:26:37.601932725Z" level=info msg="StopPodSandbox for \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\" returns successfully" Jan 29 16:26:37.602149 containerd[1509]: time="2025-01-29T16:26:37.602133136Z" level=info msg="RemovePodSandbox for \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\"" Jan 29 16:26:37.602195 containerd[1509]: time="2025-01-29T16:26:37.602151201Z" level=info msg="Forcibly stopping sandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\"" Jan 29 16:26:37.602233 containerd[1509]: time="2025-01-29T16:26:37.602208189Z" level=info msg="TearDown network for sandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\" successfully" Jan 29 16:26:37.605597 containerd[1509]: time="2025-01-29T16:26:37.605577078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.605650 containerd[1509]: time="2025-01-29T16:26:37.605604921Z" level=info msg="RemovePodSandbox \"e4ba636391f4e23ae5683b4b466721cb97ee4c59149e9a14ee38c4df02f5f28e\" returns successfully" Jan 29 16:26:37.605834 containerd[1509]: time="2025-01-29T16:26:37.605810953Z" level=info msg="StopPodSandbox for \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\"" Jan 29 16:26:37.605897 containerd[1509]: time="2025-01-29T16:26:37.605882468Z" level=info msg="TearDown network for sandbox \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\" successfully" Jan 29 16:26:37.605897 containerd[1509]: time="2025-01-29T16:26:37.605893079Z" level=info msg="StopPodSandbox for \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\" returns successfully" Jan 29 16:26:37.606111 containerd[1509]: time="2025-01-29T16:26:37.606091115Z" level=info msg="RemovePodSandbox for \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\"" Jan 29 16:26:37.606142 containerd[1509]: time="2025-01-29T16:26:37.606110662Z" level=info msg="Forcibly stopping sandbox \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\"" Jan 29 16:26:37.606194 containerd[1509]: time="2025-01-29T16:26:37.606169143Z" level=info msg="TearDown network for sandbox \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\" successfully" Jan 29 16:26:37.609634 containerd[1509]: time="2025-01-29T16:26:37.609589451Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.609679 containerd[1509]: time="2025-01-29T16:26:37.609635298Z" level=info msg="RemovePodSandbox \"12b2aef73338696922bc8301271d66b7d582de522a0757169f66e25b1a0d2e1b\" returns successfully" Jan 29 16:26:37.609920 containerd[1509]: time="2025-01-29T16:26:37.609898377Z" level=info msg="StopPodSandbox for \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\"" Jan 29 16:26:37.609992 containerd[1509]: time="2025-01-29T16:26:37.609977207Z" level=info msg="TearDown network for sandbox \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\" successfully" Jan 29 16:26:37.610014 containerd[1509]: time="2025-01-29T16:26:37.609990443Z" level=info msg="StopPodSandbox for \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\" returns successfully" Jan 29 16:26:37.610242 containerd[1509]: time="2025-01-29T16:26:37.610222653Z" level=info msg="RemovePodSandbox for \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\"" Jan 29 16:26:37.610276 containerd[1509]: time="2025-01-29T16:26:37.610241540Z" level=info msg="Forcibly stopping sandbox \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\"" Jan 29 16:26:37.610323 containerd[1509]: time="2025-01-29T16:26:37.610297936Z" level=info msg="TearDown network for sandbox \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\" successfully" Jan 29 16:26:37.613783 containerd[1509]: time="2025-01-29T16:26:37.613739104Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.613838 containerd[1509]: time="2025-01-29T16:26:37.613787405Z" level=info msg="RemovePodSandbox \"743217462f3b1f8b1406480e696ec498648f8619d23b895ee41734d519c3cf5a\" returns successfully" Jan 29 16:26:37.614090 containerd[1509]: time="2025-01-29T16:26:37.614073559Z" level=info msg="StopPodSandbox for \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\"" Jan 29 16:26:37.614155 containerd[1509]: time="2025-01-29T16:26:37.614142921Z" level=info msg="TearDown network for sandbox \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\" successfully" Jan 29 16:26:37.614183 containerd[1509]: time="2025-01-29T16:26:37.614154262Z" level=info msg="StopPodSandbox for \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\" returns successfully" Jan 29 16:26:37.614596 containerd[1509]: time="2025-01-29T16:26:37.614572687Z" level=info msg="RemovePodSandbox for \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\"" Jan 29 16:26:37.614596 containerd[1509]: time="2025-01-29T16:26:37.614590772Z" level=info msg="Forcibly stopping sandbox \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\"" Jan 29 16:26:37.614700 containerd[1509]: time="2025-01-29T16:26:37.614662467Z" level=info msg="TearDown network for sandbox \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\" successfully" Jan 29 16:26:37.617914 containerd[1509]: time="2025-01-29T16:26:37.617882224Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.617914 containerd[1509]: time="2025-01-29T16:26:37.617912532Z" level=info msg="RemovePodSandbox \"eea8718b7698bf1ccc016d1af3b975f7932c717850c6e849372d629fe30303d1\" returns successfully" Jan 29 16:26:37.618293 containerd[1509]: time="2025-01-29T16:26:37.618143941Z" level=info msg="StopPodSandbox for \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\"" Jan 29 16:26:37.618293 containerd[1509]: time="2025-01-29T16:26:37.618221418Z" level=info msg="TearDown network for sandbox \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\" successfully" Jan 29 16:26:37.618293 containerd[1509]: time="2025-01-29T16:26:37.618230295Z" level=info msg="StopPodSandbox for \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\" returns successfully" Jan 29 16:26:37.618492 containerd[1509]: time="2025-01-29T16:26:37.618468177Z" level=info msg="RemovePodSandbox for \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\"" Jan 29 16:26:37.618492 containerd[1509]: time="2025-01-29T16:26:37.618487494Z" level=info msg="Forcibly stopping sandbox \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\"" Jan 29 16:26:37.618579 containerd[1509]: time="2025-01-29T16:26:37.618549672Z" level=info msg="TearDown network for sandbox \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\" successfully" Jan 29 16:26:37.621835 containerd[1509]: time="2025-01-29T16:26:37.621809905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:26:37.621871 containerd[1509]: time="2025-01-29T16:26:37.621844300Z" level=info msg="RemovePodSandbox \"6e0d4096cf670ac00285477a22eafcb153c182a1c097582a5647e17c0a570683\" returns successfully" Jan 29 16:26:37.667653 sshd[5881]: Connection closed by 10.0.0.1 port 41180 Jan 29 16:26:37.668038 sshd-session[5877]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:37.672544 systemd[1]: sshd@16-10.0.0.142:22-10.0.0.1:41180.service: Deactivated successfully. Jan 29 16:26:37.674710 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:26:37.675624 systemd-logind[1494]: Session 17 logged out. Waiting for processes to exit. Jan 29 16:26:37.676637 systemd-logind[1494]: Removed session 17. Jan 29 16:26:42.693788 systemd[1]: Started sshd@17-10.0.0.142:22-10.0.0.1:41182.service - OpenSSH per-connection server daemon (10.0.0.1:41182). Jan 29 16:26:42.740880 sshd[5895]: Accepted publickey for core from 10.0.0.1 port 41182 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:42.744525 sshd-session[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:42.750274 systemd-logind[1494]: New session 18 of user core. Jan 29 16:26:42.761758 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 16:26:42.774752 kubelet[2623]: E0129 16:26:42.774719 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:42.896028 sshd[5921]: Connection closed by 10.0.0.1 port 41182 Jan 29 16:26:42.896456 sshd-session[5895]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:42.909564 systemd[1]: sshd@17-10.0.0.142:22-10.0.0.1:41182.service: Deactivated successfully. Jan 29 16:26:42.911567 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 16:26:42.913218 systemd-logind[1494]: Session 18 logged out. Waiting for processes to exit. 
Jan 29 16:26:42.914749 systemd[1]: Started sshd@18-10.0.0.142:22-10.0.0.1:41190.service - OpenSSH per-connection server daemon (10.0.0.1:41190). Jan 29 16:26:42.915519 systemd-logind[1494]: Removed session 18. Jan 29 16:26:42.959348 sshd[5933]: Accepted publickey for core from 10.0.0.1 port 41190 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:42.960852 sshd-session[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:42.965275 systemd-logind[1494]: New session 19 of user core. Jan 29 16:26:42.974536 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 16:26:43.426362 sshd[5936]: Connection closed by 10.0.0.1 port 41190 Jan 29 16:26:43.426911 sshd-session[5933]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:43.436159 systemd[1]: sshd@18-10.0.0.142:22-10.0.0.1:41190.service: Deactivated successfully. Jan 29 16:26:43.438111 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 16:26:43.439794 systemd-logind[1494]: Session 19 logged out. Waiting for processes to exit. Jan 29 16:26:43.452675 systemd[1]: Started sshd@19-10.0.0.142:22-10.0.0.1:41206.service - OpenSSH per-connection server daemon (10.0.0.1:41206). Jan 29 16:26:43.453642 systemd-logind[1494]: Removed session 19. Jan 29 16:26:43.493342 sshd[5946]: Accepted publickey for core from 10.0.0.1 port 41206 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:43.494865 sshd-session[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:43.499456 systemd-logind[1494]: New session 20 of user core. Jan 29 16:26:43.509536 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 29 16:26:45.140421 sshd[5949]: Connection closed by 10.0.0.1 port 41206
Jan 29 16:26:45.141023 sshd-session[5946]: pam_unix(sshd:session): session closed for user core
Jan 29 16:26:45.157689 systemd[1]: sshd@19-10.0.0.142:22-10.0.0.1:41206.service: Deactivated successfully.
Jan 29 16:26:45.160797 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 16:26:45.161300 systemd[1]: session-20.scope: Consumed 612ms CPU time, 73M memory peak.
Jan 29 16:26:45.163846 systemd-logind[1494]: Session 20 logged out. Waiting for processes to exit.
Jan 29 16:26:45.174771 systemd[1]: Started sshd@20-10.0.0.142:22-10.0.0.1:41210.service - OpenSSH per-connection server daemon (10.0.0.1:41210).
Jan 29 16:26:45.176675 systemd-logind[1494]: Removed session 20.
Jan 29 16:26:45.218002 sshd[5969]: Accepted publickey for core from 10.0.0.1 port 41210 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:26:45.219518 sshd-session[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:26:45.224274 systemd-logind[1494]: New session 21 of user core.
Jan 29 16:26:45.235549 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 16:26:45.450793 sshd[5972]: Connection closed by 10.0.0.1 port 41210
Jan 29 16:26:45.451080 sshd-session[5969]: pam_unix(sshd:session): session closed for user core
Jan 29 16:26:45.463923 systemd[1]: sshd@20-10.0.0.142:22-10.0.0.1:41210.service: Deactivated successfully.
Jan 29 16:26:45.466292 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 16:26:45.467935 systemd-logind[1494]: Session 21 logged out. Waiting for processes to exit.
Jan 29 16:26:45.477391 systemd[1]: Started sshd@21-10.0.0.142:22-10.0.0.1:41224.service - OpenSSH per-connection server daemon (10.0.0.1:41224).
Jan 29 16:26:45.478446 systemd-logind[1494]: Removed session 21.
Jan 29 16:26:45.514457 sshd[5983]: Accepted publickey for core from 10.0.0.1 port 41224 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:26:45.515947 sshd-session[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:26:45.520094 systemd-logind[1494]: New session 22 of user core.
Jan 29 16:26:45.529525 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 16:26:45.643034 sshd[5986]: Connection closed by 10.0.0.1 port 41224
Jan 29 16:26:45.643367 sshd-session[5983]: pam_unix(sshd:session): session closed for user core
Jan 29 16:26:45.646917 systemd[1]: sshd@21-10.0.0.142:22-10.0.0.1:41224.service: Deactivated successfully.
Jan 29 16:26:45.648961 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 16:26:45.649715 systemd-logind[1494]: Session 22 logged out. Waiting for processes to exit.
Jan 29 16:26:45.650607 systemd-logind[1494]: Removed session 22.
Jan 29 16:26:50.658750 systemd[1]: Started sshd@22-10.0.0.142:22-10.0.0.1:36460.service - OpenSSH per-connection server daemon (10.0.0.1:36460).
Jan 29 16:26:50.707470 sshd[6000]: Accepted publickey for core from 10.0.0.1 port 36460 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:26:50.709572 sshd-session[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:26:50.714972 systemd-logind[1494]: New session 23 of user core.
Jan 29 16:26:50.723613 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 16:26:50.832214 sshd[6002]: Connection closed by 10.0.0.1 port 36460
Jan 29 16:26:50.832627 sshd-session[6000]: pam_unix(sshd:session): session closed for user core
Jan 29 16:26:50.836523 systemd[1]: sshd@22-10.0.0.142:22-10.0.0.1:36460.service: Deactivated successfully.
Jan 29 16:26:50.838768 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 16:26:50.839459 systemd-logind[1494]: Session 23 logged out. Waiting for processes to exit.
Jan 29 16:26:50.840460 systemd-logind[1494]: Removed session 23.
Jan 29 16:26:53.130619 kubelet[2623]: I0129 16:26:53.130557    2623 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 16:26:55.856699 systemd[1]: Started sshd@23-10.0.0.142:22-10.0.0.1:36470.service - OpenSSH per-connection server daemon (10.0.0.1:36470).
Jan 29 16:26:55.893229 sshd[6028]: Accepted publickey for core from 10.0.0.1 port 36470 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:26:55.894905 sshd-session[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:26:55.898729 systemd-logind[1494]: New session 24 of user core.
Jan 29 16:26:55.913531 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 16:26:56.022763 sshd[6030]: Connection closed by 10.0.0.1 port 36470
Jan 29 16:26:56.023187 sshd-session[6028]: pam_unix(sshd:session): session closed for user core
Jan 29 16:26:56.026798 systemd[1]: sshd@23-10.0.0.142:22-10.0.0.1:36470.service: Deactivated successfully.
Jan 29 16:26:56.028728 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 16:26:56.029357 systemd-logind[1494]: Session 24 logged out. Waiting for processes to exit.
Jan 29 16:26:56.030235 systemd-logind[1494]: Removed session 24.
Jan 29 16:26:56.471963 kubelet[2623]: E0129 16:26:56.471918    2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:01.049743 systemd[1]: Started sshd@24-10.0.0.142:22-10.0.0.1:41490.service - OpenSSH per-connection server daemon (10.0.0.1:41490).
Jan 29 16:27:01.102966 sshd[6043]: Accepted publickey for core from 10.0.0.1 port 41490 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:27:01.104743 sshd-session[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:27:01.109382 systemd-logind[1494]: New session 25 of user core.
Jan 29 16:27:01.118580 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 16:27:01.248713 sshd[6045]: Connection closed by 10.0.0.1 port 41490
Jan 29 16:27:01.249070 sshd-session[6043]: pam_unix(sshd:session): session closed for user core
Jan 29 16:27:01.253025 systemd[1]: sshd@24-10.0.0.142:22-10.0.0.1:41490.service: Deactivated successfully.
Jan 29 16:27:01.255469 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 16:27:01.256225 systemd-logind[1494]: Session 25 logged out. Waiting for processes to exit.
Jan 29 16:27:01.257219 systemd-logind[1494]: Removed session 25.
Jan 29 16:27:02.471766 kubelet[2623]: E0129 16:27:02.471710    2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:06.261665 systemd[1]: Started sshd@25-10.0.0.142:22-10.0.0.1:41498.service - OpenSSH per-connection server daemon (10.0.0.1:41498).
Jan 29 16:27:06.332795 sshd[6077]: Accepted publickey for core from 10.0.0.1 port 41498 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:27:06.334665 sshd-session[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:27:06.339146 systemd-logind[1494]: New session 26 of user core.
Jan 29 16:27:06.357527 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 16:27:06.473010 sshd[6079]: Connection closed by 10.0.0.1 port 41498
Jan 29 16:27:06.473382 sshd-session[6077]: pam_unix(sshd:session): session closed for user core
Jan 29 16:27:06.477196 systemd[1]: sshd@25-10.0.0.142:22-10.0.0.1:41498.service: Deactivated successfully.
Jan 29 16:27:06.479239 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 16:27:06.479937 systemd-logind[1494]: Session 26 logged out. Waiting for processes to exit.
Jan 29 16:27:06.480808 systemd-logind[1494]: Removed session 26.
Jan 29 16:27:07.471462 kubelet[2623]: E0129 16:27:07.471380    2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"