Jan 13 20:42:52.906566 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:42:52.906588 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:42:52.906599 kernel: BIOS-provided physical RAM map:
Jan 13 20:42:52.906606 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:42:52.906612 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:42:52.906619 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:42:52.906626 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 13 20:42:52.906633 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 13 20:42:52.906640 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 20:42:52.906648 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 20:42:52.906655 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:42:52.906661 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:42:52.906668 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:42:52.906675 kernel: NX (Execute Disable) protection: active
Jan 13 20:42:52.906682 kernel: APIC: Static calls initialized
Jan 13 20:42:52.906692 kernel: SMBIOS 2.8 present.
Jan 13 20:42:52.906699 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 13 20:42:52.906706 kernel: Hypervisor detected: KVM
Jan 13 20:42:52.906713 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:42:52.906720 kernel: kvm-clock: using sched offset of 2366196799 cycles
Jan 13 20:42:52.906728 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:42:52.906735 kernel: tsc: Detected 2794.750 MHz processor
Jan 13 20:42:52.906743 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:42:52.906750 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:42:52.906758 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 13 20:42:52.906767 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:42:52.906775 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:42:52.906782 kernel: Using GB pages for direct mapping
Jan 13 20:42:52.906789 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:42:52.906796 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 13 20:42:52.906803 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:42:52.906810 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:42:52.906818 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:42:52.906827 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 13 20:42:52.906834 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:42:52.906841 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:42:52.906848 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:42:52.906856 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:42:52.906863 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 13 20:42:52.906870 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 13 20:42:52.906881 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 13 20:42:52.906890 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 13 20:42:52.906898 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 13 20:42:52.906905 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 13 20:42:52.906913 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 13 20:42:52.906920 kernel: No NUMA configuration found
Jan 13 20:42:52.906927 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 13 20:42:52.906935 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 13 20:42:52.906944 kernel: Zone ranges:
Jan 13 20:42:52.906952 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:42:52.906959 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 13 20:42:52.906966 kernel: Normal empty
Jan 13 20:42:52.906974 kernel: Movable zone start for each node
Jan 13 20:42:52.906981 kernel: Early memory node ranges
Jan 13 20:42:52.906988 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:42:52.906996 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 13 20:42:52.907003 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 13 20:42:52.907025 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:42:52.907033 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:42:52.907041 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 13 20:42:52.907048 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 20:42:52.907056 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:42:52.907063 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:42:52.907070 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:42:52.907078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:42:52.907085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:42:52.907095 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:42:52.907103 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:42:52.907110 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:42:52.907118 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:42:52.907125 kernel: TSC deadline timer available
Jan 13 20:42:52.907133 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 20:42:52.907140 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:42:52.907147 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 20:42:52.907155 kernel: kvm-guest: setup PV sched yield
Jan 13 20:42:52.907162 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 20:42:52.907172 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:42:52.907180 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:42:52.907187 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 20:42:52.907195 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 20:42:52.907202 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 20:42:52.907210 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 20:42:52.907217 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:42:52.907224 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:42:52.907233 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:42:52.907243 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:42:52.907251 kernel: random: crng init done
Jan 13 20:42:52.907258 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:42:52.907266 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:42:52.907273 kernel: Fallback order for Node 0: 0
Jan 13 20:42:52.907280 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 13 20:42:52.907288 kernel: Policy zone: DMA32
Jan 13 20:42:52.907295 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:42:52.907306 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 138948K reserved, 0K cma-reserved)
Jan 13 20:42:52.907313 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:42:52.907328 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:42:52.907343 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:42:52.907358 kernel: Dynamic Preempt: voluntary
Jan 13 20:42:52.907365 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:42:52.907385 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:42:52.907394 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:42:52.907402 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:42:52.907412 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:42:52.907419 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:42:52.907427 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:42:52.907438 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:42:52.907445 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 20:42:52.907453 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:42:52.907460 kernel: Console: colour VGA+ 80x25
Jan 13 20:42:52.907467 kernel: printk: console [ttyS0] enabled
Jan 13 20:42:52.907475 kernel: ACPI: Core revision 20230628
Jan 13 20:42:52.907485 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 20:42:52.907492 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:42:52.907500 kernel: x2apic enabled
Jan 13 20:42:52.907507 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:42:52.907515 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 20:42:52.907522 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 20:42:52.907530 kernel: kvm-guest: setup PV IPIs
Jan 13 20:42:52.907547 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 20:42:52.907555 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 20:42:52.907563 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 13 20:42:52.907570 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 20:42:52.907578 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 20:42:52.907588 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 20:42:52.907596 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:42:52.907604 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:42:52.907612 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:42:52.907622 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:42:52.907630 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 20:42:52.907637 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 20:42:52.907645 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 20:42:52.907653 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 20:42:52.907661 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 20:42:52.907669 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 20:42:52.907677 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 20:42:52.907685 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:42:52.907695 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:42:52.907703 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:42:52.907710 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:42:52.907718 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 20:42:52.907726 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:42:52.907733 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:42:52.907741 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:42:52.907749 kernel: landlock: Up and running.
Jan 13 20:42:52.907756 kernel: SELinux: Initializing.
Jan 13 20:42:52.907766 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:42:52.907774 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:42:52.907782 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 20:42:52.907790 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:42:52.907798 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:42:52.907806 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:42:52.907813 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 20:42:52.907821 kernel: ... version: 0
Jan 13 20:42:52.907831 kernel: ... bit width: 48
Jan 13 20:42:52.907839 kernel: ... generic registers: 6
Jan 13 20:42:52.907846 kernel: ... value mask: 0000ffffffffffff
Jan 13 20:42:52.907854 kernel: ... max period: 00007fffffffffff
Jan 13 20:42:52.907862 kernel: ... fixed-purpose events: 0
Jan 13 20:42:52.907869 kernel: ... event mask: 000000000000003f
Jan 13 20:42:52.907877 kernel: signal: max sigframe size: 1776
Jan 13 20:42:52.907884 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:42:52.907892 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:42:52.907900 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:42:52.907910 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:42:52.907918 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 20:42:52.907925 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:42:52.907933 kernel: smpboot: Max logical packages: 1
Jan 13 20:42:52.907941 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 13 20:42:52.907948 kernel: devtmpfs: initialized
Jan 13 20:42:52.907956 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:42:52.907964 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:42:52.907971 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:42:52.907981 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:42:52.907989 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:42:52.907997 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:42:52.908005 kernel: audit: type=2000 audit(1736800971.904:1): state=initialized audit_enabled=0 res=1
Jan 13 20:42:52.908023 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:42:52.908031 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:42:52.908039 kernel: cpuidle: using governor menu
Jan 13 20:42:52.908046 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:42:52.908054 kernel: dca service started, version 1.12.1
Jan 13 20:42:52.908064 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 20:42:52.908072 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 20:42:52.908080 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:42:52.908088 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:42:52.908096 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:42:52.908103 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:42:52.908111 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:42:52.908119 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:42:52.908126 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:42:52.908137 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:42:52.908144 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:42:52.908152 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:42:52.908160 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:42:52.908167 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:42:52.908175 kernel: ACPI: Interpreter enabled
Jan 13 20:42:52.908183 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 20:42:52.908190 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:42:52.908198 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:42:52.908208 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:42:52.908216 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 20:42:52.908224 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:42:52.908407 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:42:52.908538 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 20:42:52.908659 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 20:42:52.908670 kernel: PCI host bridge to bus 0000:00
Jan 13 20:42:52.908797 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:42:52.908909 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:42:52.909061 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:42:52.909178 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 20:42:52.909289 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 20:42:52.909411 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 13 20:42:52.909524 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:42:52.909668 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 20:42:52.909803 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 20:42:52.909924 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 13 20:42:52.910061 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 13 20:42:52.910183 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 13 20:42:52.910304 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:42:52.910454 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:42:52.910578 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 20:42:52.910698 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 13 20:42:52.910818 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 13 20:42:52.910949 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 20:42:52.911114 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 20:42:52.911237 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 13 20:42:52.911360 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 13 20:42:52.911501 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:42:52.911629 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 13 20:42:52.911750 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 13 20:42:52.911869 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 13 20:42:52.911988 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 13 20:42:52.912131 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 20:42:52.912260 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 20:42:52.912399 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 20:42:52.912522 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 13 20:42:52.912644 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 13 20:42:52.912772 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 20:42:52.912893 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 20:42:52.912904 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:42:52.912916 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:42:52.912924 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:42:52.912932 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:42:52.912940 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 20:42:52.912948 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 20:42:52.912955 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 20:42:52.912963 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 20:42:52.912971 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 20:42:52.912981 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 20:42:52.912989 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 20:42:52.912997 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 20:42:52.913005 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 20:42:52.913077 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 20:42:52.913086 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 20:42:52.913094 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 20:42:52.913102 kernel: iommu: Default domain type: Translated
Jan 13 20:42:52.913110 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:42:52.913121 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:42:52.913129 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:42:52.913137 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:42:52.913144 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 13 20:42:52.913268 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 20:42:52.913397 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 20:42:52.913517 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:42:52.913528 kernel: vgaarb: loaded
Jan 13 20:42:52.913536 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 20:42:52.913547 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 20:42:52.913555 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:42:52.913563 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:42:52.913571 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:42:52.913579 kernel: pnp: PnP ACPI init
Jan 13 20:42:52.913711 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 20:42:52.913723 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 20:42:52.913731 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:42:52.913743 kernel: NET: Registered PF_INET protocol family
Jan 13 20:42:52.913751 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:42:52.913759 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:42:52.913766 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:42:52.913774 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:42:52.913782 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:42:52.913790 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:42:52.913798 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:42:52.913805 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:42:52.913816 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:42:52.913823 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:42:52.913933 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:42:52.914057 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:42:52.914167 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:42:52.914276 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 20:42:52.914395 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 20:42:52.914506 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 13 20:42:52.914521 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:42:52.914529 kernel: Initialise system trusted keyrings
Jan 13 20:42:52.914536 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:42:52.914544 kernel: Key type asymmetric registered
Jan 13 20:42:52.914552 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:42:52.914560 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:42:52.914568 kernel: io scheduler mq-deadline registered
Jan 13 20:42:52.914576 kernel: io scheduler kyber registered
Jan 13 20:42:52.914583 kernel: io scheduler bfq registered
Jan 13 20:42:52.914594 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:42:52.914602 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 20:42:52.914610 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 20:42:52.914618 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 20:42:52.914625 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:42:52.914633 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:42:52.914641 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:42:52.914649 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:42:52.914657 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:42:52.914668 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:42:52.914790 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 20:42:52.914904 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 20:42:52.915052 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:42:52 UTC (1736800972)
Jan 13 20:42:52.915168 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 20:42:52.915179 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 20:42:52.915187 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:42:52.915194 kernel: Segment Routing with IPv6
Jan 13 20:42:52.915206 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:42:52.915214 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:42:52.915222 kernel: Key type dns_resolver registered
Jan 13 20:42:52.915229 kernel: IPI shorthand broadcast: enabled
Jan 13 20:42:52.915237 kernel: sched_clock: Marking stable (652002785, 113901996)->(783583208, -17678427)
Jan 13 20:42:52.915245 kernel: registered taskstats version 1
Jan 13 20:42:52.915253 kernel: Loading compiled-in X.509 certificates
Jan 13 20:42:52.915261 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 13 20:42:52.915269 kernel: Key type .fscrypt registered
Jan 13 20:42:52.915279 kernel: Key type fscrypt-provisioning registered
Jan 13 20:42:52.915286 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:42:52.915294 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:42:52.915302 kernel: ima: No architecture policies found
Jan 13 20:42:52.915310 kernel: clk: Disabling unused clocks
Jan 13 20:42:52.915318 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 13 20:42:52.915325 kernel: Write protecting the kernel read-only data: 38912k
Jan 13 20:42:52.915333 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 13 20:42:52.915341 kernel: Run /init as init process
Jan 13 20:42:52.915351 kernel: with arguments:
Jan 13 20:42:52.915359 kernel: /init
Jan 13 20:42:52.915366 kernel: with environment:
Jan 13 20:42:52.915374 kernel: HOME=/
Jan 13 20:42:52.915392 kernel: TERM=linux
Jan 13 20:42:52.915401 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:42:52.915411 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:42:52.915421 systemd[1]: Detected virtualization kvm.
Jan 13 20:42:52.915433 systemd[1]: Detected architecture x86-64.
Jan 13 20:42:52.915441 systemd[1]: Running in initrd.
Jan 13 20:42:52.915449 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:42:52.915457 systemd[1]: Hostname set to .
Jan 13 20:42:52.915466 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:42:52.915474 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:42:52.915483 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:42:52.915491 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:42:52.915503 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:42:52.915523 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:42:52.915534 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:42:52.915543 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:42:52.915554 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:42:52.915565 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:42:52.915573 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:42:52.915582 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:42:52.915590 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:42:52.915599 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:42:52.915607 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:42:52.915616 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:42:52.915624 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:42:52.915635 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:42:52.915644 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:42:52.915652 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:42:52.915663 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:42:52.915672 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:42:52.915681 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:42:52.915689 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:42:52.915698 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:42:52.915709 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:42:52.915717 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:42:52.915726 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:42:52.915734 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:42:52.915743 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:42:52.915752 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:42:52.915760 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:42:52.915769 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:42:52.915777 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:42:52.915805 systemd-journald[194]: Collecting audit messages is disabled.
Jan 13 20:42:52.915828 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:42:52.915840 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:42:52.915848 systemd-journald[194]: Journal started
Jan 13 20:42:52.915869 systemd-journald[194]: Runtime Journal (/run/log/journal/cbe7550c44e4479eabaf5b8b15f86922) is 6.0M, max 48.3M, 42.3M free.
Jan 13 20:42:52.904283 systemd-modules-load[195]: Inserted module 'overlay'
Jan 13 20:42:52.950255 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:42:52.950273 kernel: Bridge firewalling registered
Jan 13 20:42:52.950283 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:42:52.937355 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 13 20:42:52.946278 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:42:52.946860 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:42:52.961239 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:42:52.963133 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:42:52.966138 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:42:52.971155 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:42:52.978226 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:42:52.981233 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:42:52.983921 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:42:52.986591 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:42:52.998182 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:42:53.001424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:42:53.011191 dracut-cmdline[230]: dracut-dracut-053
Jan 13 20:42:53.014755 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:42:53.039074 systemd-resolved[233]: Positive Trust Anchors:
Jan 13 20:42:53.039087 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:42:53.039125 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:42:53.041958 systemd-resolved[233]: Defaulting to hostname 'linux'.
Jan 13 20:42:53.043080 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:42:53.049311 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:42:53.114063 kernel: SCSI subsystem initialized
Jan 13 20:42:53.124043 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:42:53.135051 kernel: iscsi: registered transport (tcp)
Jan 13 20:42:53.157434 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:42:53.157495 kernel: QLogic iSCSI HBA Driver
Jan 13 20:42:53.203747 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:42:53.217153 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:42:53.245996 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:42:53.246125 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:42:53.246144 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:42:53.291050 kernel: raid6: avx2x4 gen() 29878 MB/s
Jan 13 20:42:53.308041 kernel: raid6: avx2x2 gen() 29395 MB/s
Jan 13 20:42:53.325147 kernel: raid6: avx2x1 gen() 25066 MB/s
Jan 13 20:42:53.325173 kernel: raid6: using algorithm avx2x4 gen() 29878 MB/s
Jan 13 20:42:53.343152 kernel: raid6: .... xor() 7417 MB/s, rmw enabled
Jan 13 20:42:53.343180 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 20:42:53.364041 kernel: xor: automatically using best checksumming function avx
Jan 13 20:42:53.521044 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:42:53.535301 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:42:53.545208 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:42:53.560076 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Jan 13 20:42:53.565941 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:42:53.573199 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:42:53.586843 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Jan 13 20:42:53.621929 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:42:53.629275 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:42:53.693888 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:42:53.707129 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:42:53.715438 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:42:53.719394 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:42:53.723880 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 13 20:42:53.746643 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 20:42:53.746803 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:42:53.746816 kernel: GPT:9289727 != 19775487
Jan 13 20:42:53.746834 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:42:53.746844 kernel: GPT:9289727 != 19775487
Jan 13 20:42:53.746854 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:42:53.746864 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:42:53.746875 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:42:53.724204 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:42:53.727247 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:42:53.737657 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:42:53.751033 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:42:53.760466 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:42:53.763825 kernel: libata version 3.00 loaded.
Jan 13 20:42:53.760590 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:42:53.767988 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 20:42:53.768008 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:42:53.765188 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:42:53.767371 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:42:53.767538 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:42:53.771846 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:42:53.778552 kernel: ahci 0000:00:1f.2: version 3.0
Jan 13 20:42:53.800622 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 13 20:42:53.800644 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 13 20:42:53.800810 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 13 20:42:53.801348 kernel: scsi host0: ahci
Jan 13 20:42:53.801523 kernel: scsi host1: ahci
Jan 13 20:42:53.801673 kernel: scsi host2: ahci
Jan 13 20:42:53.801818 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (467)
Jan 13 20:42:53.801830 kernel: scsi host3: ahci
Jan 13 20:42:53.801976 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (460)
Jan 13 20:42:53.801988 kernel: scsi host4: ahci
Jan 13 20:42:53.802188 kernel: scsi host5: ahci
Jan 13 20:42:53.802398 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 13 20:42:53.802414 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 13 20:42:53.802428 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 13 20:42:53.802442 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 13 20:42:53.802455 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 13 20:42:53.802469 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 13 20:42:53.784329 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:42:53.806872 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:42:53.846742 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:42:53.849487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:42:53.857993 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:42:53.860090 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:42:53.865555 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:42:53.878144 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:42:53.884177 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:42:53.907879 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:42:54.108667 disk-uuid[557]: Primary Header is updated.
Jan 13 20:42:54.108667 disk-uuid[557]: Secondary Entries is updated.
Jan 13 20:42:54.108667 disk-uuid[557]: Secondary Header is updated.
Jan 13 20:42:54.127639 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:42:54.127660 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 13 20:42:54.127671 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 13 20:42:54.127681 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 13 20:42:54.127696 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 13 20:42:54.127706 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 13 20:42:54.130099 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 13 20:42:54.133186 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 13 20:42:54.133213 kernel: ata3.00: applying bridge limits
Jan 13 20:42:54.134304 kernel: ata3.00: configured for UDMA/100
Jan 13 20:42:54.138025 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 13 20:42:54.194045 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 13 20:42:54.208777 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 20:42:54.208791 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 13 20:42:55.134039 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:42:55.134104 disk-uuid[566]: The operation has completed successfully.
Jan 13 20:42:55.168890 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:42:55.169027 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:42:55.189245 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:42:55.192938 sh[594]: Success
Jan 13 20:42:55.206052 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 13 20:42:55.242328 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:42:55.267641 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:42:55.271989 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:42:55.283405 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 13 20:42:55.283444 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:42:55.283455 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:42:55.284534 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:42:55.285268 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:42:55.289836 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:42:55.292361 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:42:55.305137 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:42:55.307819 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:42:55.316123 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:42:55.316159 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:42:55.316174 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:42:55.319049 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:42:55.328328 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:42:55.330259 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:42:55.338976 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:42:55.347172 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:42:55.402429 ignition[688]: Ignition 2.20.0
Jan 13 20:42:55.402440 ignition[688]: Stage: fetch-offline
Jan 13 20:42:55.402485 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:42:55.402495 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:42:55.402604 ignition[688]: parsed url from cmdline: ""
Jan 13 20:42:55.402609 ignition[688]: no config URL provided
Jan 13 20:42:55.402616 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:42:55.402627 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:42:55.402665 ignition[688]: op(1): [started] loading QEMU firmware config module
Jan 13 20:42:55.402672 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 20:42:55.410353 ignition[688]: op(1): [finished] loading QEMU firmware config module
Jan 13 20:42:55.438790 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:42:55.447160 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:42:55.454881 ignition[688]: parsing config with SHA512: b368d0ef0ceb435791a903c1514a61c23349768323a0499ae92734fc3a73f308988a9dfca8f1ef97c9feb42c6e758deb52c76b33ba6ae5ac76373696b9de93c4
Jan 13 20:42:55.458773 unknown[688]: fetched base config from "system"
Jan 13 20:42:55.458786 unknown[688]: fetched user config from "qemu"
Jan 13 20:42:55.460371 ignition[688]: fetch-offline: fetch-offline passed
Jan 13 20:42:55.460503 ignition[688]: Ignition finished successfully
Jan 13 20:42:55.463382 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:42:55.475661 systemd-networkd[784]: lo: Link UP
Jan 13 20:42:55.475672 systemd-networkd[784]: lo: Gained carrier
Jan 13 20:42:55.478643 systemd-networkd[784]: Enumeration completed
Jan 13 20:42:55.478740 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:42:55.479084 systemd[1]: Reached target network.target - Network.
Jan 13 20:42:55.479561 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 20:42:55.484056 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:42:55.484060 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:42:55.485509 systemd-networkd[784]: eth0: Link UP
Jan 13 20:42:55.485513 systemd-networkd[784]: eth0: Gained carrier
Jan 13 20:42:55.485522 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:42:55.487160 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:42:55.500738 ignition[787]: Ignition 2.20.0
Jan 13 20:42:55.500753 ignition[787]: Stage: kargs
Jan 13 20:42:55.501097 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:42:55.500950 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:42:55.500965 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:42:55.502003 ignition[787]: kargs: kargs passed
Jan 13 20:42:55.502075 ignition[787]: Ignition finished successfully
Jan 13 20:42:55.506564 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:42:55.523348 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:42:55.534706 ignition[798]: Ignition 2.20.0
Jan 13 20:42:55.534719 ignition[798]: Stage: disks
Jan 13 20:42:55.534889 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:42:55.534901 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:42:55.535724 ignition[798]: disks: disks passed
Jan 13 20:42:55.535766 ignition[798]: Ignition finished successfully
Jan 13 20:42:55.541769 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:42:55.543109 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:42:55.544962 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:42:55.545066 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:42:55.545415 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:42:55.545754 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:42:55.557167 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:42:55.571282 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:42:55.579234 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:42:55.591228 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:42:55.680034 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 13 20:42:55.680003 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:42:55.682320 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:42:55.696117 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:42:55.699147 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:42:55.702045 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:42:55.704091 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:42:55.712570 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (817)
Jan 13 20:42:55.712594 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:42:55.712609 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:42:55.712623 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:42:55.712638 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:42:55.704131 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:42:55.715247 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:42:55.717289 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:42:55.721089 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:42:55.756836 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:42:55.760766 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:42:55.764480 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:42:55.768209 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:42:55.857090 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:42:55.866189 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:42:55.870196 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:42:55.876051 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:42:55.896406 ignition[929]: INFO : Ignition 2.20.0
Jan 13 20:42:55.896406 ignition[929]: INFO : Stage: mount
Jan 13 20:42:55.898318 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:42:55.898318 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:42:55.898318 ignition[929]: INFO : mount: mount passed
Jan 13 20:42:55.898318 ignition[929]: INFO : Ignition finished successfully
Jan 13 20:42:55.899041 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:42:55.908174 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:42:55.910300 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:42:56.282535 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:42:56.294160 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:42:56.304044 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944)
Jan 13 20:42:56.306153 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:42:56.306184 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:42:56.306199 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:42:56.309043 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:42:56.310654 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:42:56.334224 ignition[961]: INFO : Ignition 2.20.0
Jan 13 20:42:56.334224 ignition[961]: INFO : Stage: files
Jan 13 20:42:56.336052 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:42:56.336052 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:42:56.338686 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:42:56.339965 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:42:56.339965 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:42:56.343884 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:42:56.345353 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:42:56.347056 unknown[961]: wrote ssh authorized keys file for user: core
Jan 13 20:42:56.348229 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:42:56.350661 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:42:56.352546 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 20:42:56.390534 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:42:56.463499 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:42:56.463499 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:42:56.468144 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:42:56.468144 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:42:56.468144 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:42:56.468144 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:42:56.468144 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:42:56.468144 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:42:56.468144 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:42:56.468144 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:42:56.468144 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:42:56.468144 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:42:56.468144 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:42:56.468144 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:42:56.468144 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 13 20:42:56.602212 systemd-networkd[784]: eth0: Gained IPv6LL
Jan 13 20:42:57.060723 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 20:42:57.304224 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:42:57.304224 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 20:42:57.308459 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:42:57.308459 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:42:57.308459 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 20:42:57.308459 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 13 20:42:57.308459 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:42:57.308459 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:42:57.308459 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 13 20:42:57.308459 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:42:57.329563 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:42:57.335328 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:42:57.337147 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:42:57.337147 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:42:57.337147 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:42:57.337147 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:42:57.337147 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:42:57.337147 ignition[961]: INFO : files: files passed
Jan 13 20:42:57.337147 ignition[961]: INFO : Ignition finished successfully
Jan 13 20:42:57.338352 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:42:57.346264 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:42:57.350079 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:42:57.352134 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:42:57.352297 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:42:57.361238 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 20:42:57.364076 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:42:57.364076 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:42:57.368729 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:42:57.366928 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:42:57.369473 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:42:57.379204 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:42:57.406279 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:42:57.406456 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:42:57.407652 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:42:57.409788 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:42:57.411754 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:42:57.413771 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:42:57.433859 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:42:57.446184 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:42:57.455510 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:42:57.457460 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:42:57.459734 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:42:57.461792 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:42:57.461895 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:42:57.464313 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:42:57.465976 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:42:57.471861 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:42:57.473922 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:42:57.477798 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:42:57.480045 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:42:57.482213 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:42:57.484615 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:42:57.486977 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:42:57.489396 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:42:57.492840 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:42:57.493043 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:42:57.495758 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:42:57.497476 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:42:57.499884 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:42:57.500140 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:42:57.502668 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:42:57.502914 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:42:57.505236 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:42:57.505407 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:42:57.507484 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:42:57.509675 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:42:57.509871 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:42:57.512377 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:42:57.514316 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:42:57.516353 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:42:57.516471 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:42:57.518414 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:42:57.518506 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:42:57.520869 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:42:57.521037 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:42:57.523098 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:42:57.523287 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:42:57.536248 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:42:57.538064 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:42:57.538230 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:42:57.541972 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:42:57.543065 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:42:57.543396 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:42:57.546258 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:42:57.548894 ignition[1016]: INFO : Ignition 2.20.0
Jan 13 20:42:57.548894 ignition[1016]: INFO : Stage: umount
Jan 13 20:42:57.548894 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:42:57.548894 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:42:57.547628 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:42:57.550534 ignition[1016]: INFO : umount: umount passed
Jan 13 20:42:57.550534 ignition[1016]: INFO : Ignition finished successfully
Jan 13 20:42:57.568989 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:42:57.569141 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:42:57.573589 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:42:57.576815 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:42:57.577937 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:42:57.582836 systemd[1]: Stopped target network.target - Network.
Jan 13 20:42:57.584631 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:42:57.585589 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:42:57.587888 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:42:57.587945 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:42:57.591347 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:42:57.591406 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:42:57.594610 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:42:57.594671 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:42:57.598179 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:42:57.600798 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:42:57.606078 systemd-networkd[784]: eth0: DHCPv6 lease lost
Jan 13 20:42:57.608644 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:42:57.608839 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:42:57.611663 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:42:57.611834 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:42:57.616054 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:42:57.616151 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:42:57.630193 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:42:57.654945 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:42:57.655080 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:42:57.657791 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:42:57.657852 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:42:57.686117 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:42:57.686174 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:42:57.689001 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:42:57.689073 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:42:57.690737 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:42:57.723519 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:42:57.723722 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:42:58.206848 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:42:58.206981 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:42:58.210405 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:42:58.210489 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:42:58.212920 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:42:58.212974 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:42:58.215399 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:42:58.215457 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:42:58.238662 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:42:58.238730 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:42:58.240683 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:42:58.240739 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:42:58.280282 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:42:58.281539 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:42:58.281610 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:42:58.284083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:42:58.284135 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:42:58.289343 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:42:58.289470 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:42:58.373607 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:42:58.373780 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:42:58.375222 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:42:58.377442 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:42:58.377498 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:42:58.389225 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:42:58.397846 systemd[1]: Switching root.
Jan 13 20:42:58.429983 systemd-journald[194]: Journal stopped
Jan 13 20:42:59.916558 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:42:59.916639 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:42:59.916670 kernel: SELinux: policy capability open_perms=1
Jan 13 20:42:59.916685 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:42:59.916703 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:42:59.916720 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:42:59.916736 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:42:59.916755 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:42:59.916770 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:42:59.916785 kernel: audit: type=1403 audit(1736800979.037:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:42:59.916804 systemd[1]: Successfully loaded SELinux policy in 41.867ms.
Jan 13 20:42:59.916827 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.626ms.
Jan 13 20:42:59.916845 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:42:59.916861 systemd[1]: Detected virtualization kvm.
Jan 13 20:42:59.916877 systemd[1]: Detected architecture x86-64.
Jan 13 20:42:59.916892 systemd[1]: Detected first boot.
Jan 13 20:42:59.916907 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:42:59.916923 zram_generator::config[1062]: No configuration found.
Jan 13 20:42:59.916945 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:42:59.916966 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:42:59.916982 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:42:59.916998 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:42:59.917028 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:42:59.917045 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:42:59.917060 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:42:59.917078 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:42:59.917094 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:42:59.917113 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:42:59.917130 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:42:59.917145 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:42:59.917161 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:42:59.917178 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:42:59.917193 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:42:59.917218 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:42:59.917233 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:42:59.917250 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:42:59.917269 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:42:59.917285 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:42:59.917301 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:42:59.917317 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:42:59.917333 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:42:59.917349 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:42:59.917365 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:42:59.917383 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:42:59.917398 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:42:59.917414 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:42:59.917431 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:42:59.917447 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:42:59.917463 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:42:59.917478 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:42:59.917494 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:42:59.917509 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:42:59.917525 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:42:59.917544 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:42:59.917559 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:42:59.917576 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:42:59.917591 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:42:59.917607 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:42:59.917622 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:42:59.917639 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:42:59.917655 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:42:59.917674 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:42:59.917690 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:42:59.917705 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:42:59.917721 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:42:59.917736 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:42:59.917752 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:42:59.917769 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:42:59.917786 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:42:59.917801 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:42:59.917821 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:42:59.917837 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:42:59.917858 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:42:59.917873 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:42:59.917888 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:42:59.917903 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:42:59.917919 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:42:59.917936 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:42:59.917954 kernel: fuse: init (API version 7.39)
Jan 13 20:42:59.917969 kernel: loop: module loaded
Jan 13 20:42:59.917984 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:42:59.918000 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:42:59.918029 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:42:59.918045 systemd[1]: Stopped verity-setup.service.
Jan 13 20:42:59.918061 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:42:59.918101 systemd-journald[1132]: Collecting audit messages is disabled.
Jan 13 20:42:59.918132 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:42:59.918148 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:42:59.918166 systemd-journald[1132]: Journal started
Jan 13 20:42:59.918194 systemd-journald[1132]: Runtime Journal (/run/log/journal/cbe7550c44e4479eabaf5b8b15f86922) is 6.0M, max 48.3M, 42.3M free.
Jan 13 20:42:59.650406 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:42:59.678049 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 20:42:59.678644 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:42:59.679161 systemd[1]: systemd-journald.service: Consumed 1.104s CPU time.
Jan 13 20:42:59.921182 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:42:59.921224 kernel: ACPI: bus type drm_connector registered
Jan 13 20:42:59.924784 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:42:59.926139 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:42:59.927846 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:42:59.929372 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:42:59.930760 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:42:59.932456 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:42:59.932634 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:42:59.934297 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:42:59.935829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:42:59.935997 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:42:59.937648 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:42:59.937824 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:42:59.939503 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:42:59.939673 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:42:59.941279 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:42:59.941474 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:42:59.943011 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:42:59.943226 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:42:59.944689 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:42:59.946221 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:42:59.947930 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:42:59.962579 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:42:59.974161 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:42:59.977036 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:42:59.978425 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:42:59.978459 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:42:59.980840 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:42:59.983362 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:42:59.987065 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:42:59.988428 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:42:59.991148 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:42:59.995892 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:42:59.997540 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:42:59.998889 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:43:00.000541 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:43:00.003574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:43:00.008231 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:43:00.013371 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:43:00.018496 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:43:00.020822 systemd-journald[1132]: Time spent on flushing to /var/log/journal/cbe7550c44e4479eabaf5b8b15f86922 is 21.157ms for 951 entries.
Jan 13 20:43:00.020822 systemd-journald[1132]: System Journal (/var/log/journal/cbe7550c44e4479eabaf5b8b15f86922) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:43:00.064923 systemd-journald[1132]: Received client request to flush runtime journal.
Jan 13 20:43:00.064969 kernel: loop0: detected capacity change from 0 to 211296
Jan 13 20:43:00.025652 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:43:00.027994 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:43:00.032592 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:43:00.047428 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:43:00.050647 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:43:00.055073 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:43:00.058313 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:43:00.073063 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:43:00.073315 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:43:00.075625 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:43:00.081315 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 20:43:00.097194 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:43:00.105391 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:43:00.118051 kernel: loop1: detected capacity change from 0 to 138184
Jan 13 20:43:00.130462 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jan 13 20:43:00.130916 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jan 13 20:43:00.139067 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:43:00.156052 kernel: loop2: detected capacity change from 0 to 141000
Jan 13 20:43:00.163747 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:43:00.164702 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:43:00.205039 kernel: loop3: detected capacity change from 0 to 211296
Jan 13 20:43:00.215270 kernel: loop4: detected capacity change from 0 to 138184
Jan 13 20:43:00.232394 kernel: loop5: detected capacity change from 0 to 141000
Jan 13 20:43:00.246781 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 20:43:00.247509 (sd-merge)[1200]: Merged extensions into '/usr'.
Jan 13 20:43:00.254031 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:43:00.254053 systemd[1]: Reloading...
Jan 13 20:43:00.321401 zram_generator::config[1226]: No configuration found.
Jan 13 20:43:00.398787 ldconfig[1171]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:43:00.459731 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:43:00.512360 systemd[1]: Reloading finished in 257 ms.
Jan 13 20:43:00.552720 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:43:00.583380 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:43:00.596324 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:43:00.598807 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:43:00.604633 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:43:00.604648 systemd[1]: Reloading...
Jan 13 20:43:00.622257 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:43:00.622560 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:43:00.623577 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:43:00.623874 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Jan 13 20:43:00.623953 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Jan 13 20:43:00.627988 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:43:00.628001 systemd-tmpfiles[1264]: Skipping /boot
Jan 13 20:43:00.643463 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:43:00.643578 systemd-tmpfiles[1264]: Skipping /boot
Jan 13 20:43:00.667046 zram_generator::config[1293]: No configuration found.
Jan 13 20:43:00.932958 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:43:00.989807 systemd[1]: Reloading finished in 384 ms.
Jan 13 20:43:01.009555 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:43:01.023916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:43:01.035898 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:43:01.038823 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:43:01.041573 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:43:01.046431 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:43:01.050295 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:43:01.055388 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:43:01.061975 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:43:01.062261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:43:01.070357 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:43:01.077254 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:43:01.084397 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:43:01.087310 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:43:01.089993 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:43:01.091717 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:43:01.093630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:43:01.093836 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:43:01.094992 systemd-udevd[1334]: Using default interface naming scheme 'v255'.
Jan 13 20:43:01.096137 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:43:01.096371 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:43:01.098807 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:43:01.101386 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:43:01.101659 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:43:01.111862 augenrules[1359]: No rules
Jan 13 20:43:01.114646 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:43:01.114987 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:43:01.121135 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:43:01.128261 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:43:01.137385 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:43:01.138946 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:43:01.141346 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:43:01.147438 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:43:01.155304 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:43:01.160974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:43:01.165381 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:43:01.167249 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:43:01.168453 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:43:01.169537 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:43:01.173756 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:43:01.175852 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:43:01.179486 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:43:01.179918 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:43:01.182308 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:43:01.183080 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:43:01.185777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:43:01.185954 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:43:01.189030 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:43:01.191388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:43:01.191571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:43:01.201456 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:43:01.213036 augenrules[1369]: /sbin/augenrules: No change
Jan 13 20:43:01.217602 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 20:43:01.230353 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:43:01.232291 augenrules[1425]: No rules
Jan 13 20:43:01.234085 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:43:01.234161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:43:01.242215 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:43:01.245053 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1402)
Jan 13 20:43:01.247102 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:43:01.247584 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:43:01.247871 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:43:01.275625 systemd-resolved[1332]: Positive Trust Anchors:
Jan 13 20:43:01.275653 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:43:01.275696 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:43:01.280545 systemd-resolved[1332]: Defaulting to hostname 'linux'.
Jan 13 20:43:01.287166 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:43:01.291519 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:43:01.305204 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:43:01.316276 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:43:01.323056 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 20:43:01.323450 systemd-networkd[1423]: lo: Link UP
Jan 13 20:43:01.323466 systemd-networkd[1423]: lo: Gained carrier
Jan 13 20:43:01.326824 systemd-networkd[1423]: Enumeration completed
Jan 13 20:43:01.326937 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:43:01.328191 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:43:01.328204 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:43:01.328320 systemd[1]: Reached target network.target - Network.
Jan 13 20:43:01.330204 systemd-networkd[1423]: eth0: Link UP
Jan 13 20:43:01.330275 systemd-networkd[1423]: eth0: Gained carrier
Jan 13 20:43:01.330341 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:43:01.335101 kernel: ACPI: button: Power Button [PWRF]
Jan 13 20:43:01.335184 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:43:01.337930 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:43:01.344089 systemd-networkd[1423]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:43:01.349112 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 13 20:43:01.355775 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 13 20:43:01.356054 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 13 20:43:01.350318 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:43:01.351793 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:43:01.352003 systemd-timesyncd[1430]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 20:43:01.352072 systemd-timesyncd[1430]: Initial clock synchronization to Mon 2025-01-13 20:43:01.416685 UTC.
Jan 13 20:43:01.365712 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 20:43:01.406093 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:43:01.408509 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:43:01.473259 kernel: kvm_amd: TSC scaling supported
Jan 13 20:43:01.473344 kernel: kvm_amd: Nested Virtualization enabled
Jan 13 20:43:01.473364 kernel: kvm_amd: Nested Paging enabled
Jan 13 20:43:01.474523 kernel: kvm_amd: LBR virtualization supported
Jan 13 20:43:01.474612 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 13 20:43:01.475279 kernel: kvm_amd: Virtual GIF supported
Jan 13 20:43:01.499094 kernel: EDAC MC: Ver: 3.0.0
Jan 13 20:43:01.537845 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:43:01.556257 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:43:01.558243 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:43:01.565609 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:43:01.597255 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:43:01.599343 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:43:01.600543 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:43:01.601744 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:43:01.603174 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:43:01.604684 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:43:01.605934 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:43:01.607296 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:43:01.608602 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:43:01.608634 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:43:01.609602 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:43:01.611540 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:43:01.614547 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:43:01.628692 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:43:01.631556 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:43:01.633304 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:43:01.634648 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:43:01.635811 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:43:01.635919 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:43:01.635944 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:43:01.636997 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:43:01.639361 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:43:01.643045 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:43:01.643521 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:43:01.647307 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:43:01.648742 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:43:01.651735 jq[1461]: false
Jan 13 20:43:01.653264 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:43:01.657926 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:43:01.661156 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:43:01.666204 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:43:01.673523 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:43:01.675116 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:43:01.675413 dbus-daemon[1460]: [system] SELinux support is enabled
Jan 13 20:43:01.675580 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:43:01.676787 extend-filesystems[1462]: Found loop3
Jan 13 20:43:01.678386 extend-filesystems[1462]: Found loop4
Jan 13 20:43:01.678386 extend-filesystems[1462]: Found loop5
Jan 13 20:43:01.678386 extend-filesystems[1462]: Found sr0
Jan 13 20:43:01.678386 extend-filesystems[1462]: Found vda
Jan 13 20:43:01.678386 extend-filesystems[1462]: Found vda1
Jan 13 20:43:01.678386 extend-filesystems[1462]: Found vda2
Jan 13 20:43:01.677702 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:43:01.684962 extend-filesystems[1462]: Found vda3
Jan 13 20:43:01.684962 extend-filesystems[1462]: Found usr
Jan 13 20:43:01.684962 extend-filesystems[1462]: Found vda4
Jan 13 20:43:01.684962 extend-filesystems[1462]: Found vda6
Jan 13 20:43:01.684962 extend-filesystems[1462]: Found vda7
Jan 13 20:43:01.684962 extend-filesystems[1462]: Found vda9
Jan 13 20:43:01.684962 extend-filesystems[1462]: Checking size of /dev/vda9
Jan 13 20:43:01.680363 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:43:01.683378 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:43:01.689487 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:43:01.692917 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:43:01.696285 jq[1476]: true
Jan 13 20:43:01.693209 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:43:01.693540 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:43:01.693732 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:43:01.696746 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:43:01.696996 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:43:01.702778 update_engine[1474]: I20250113 20:43:01.702661 1474 main.cc:92] Flatcar Update Engine starting
Jan 13 20:43:01.703961 update_engine[1474]: I20250113 20:43:01.703922 1474 update_check_scheduler.cc:74] Next update check in 9m26s
Jan 13 20:43:01.711603 extend-filesystems[1462]: Resized partition /dev/vda9
Jan 13 20:43:01.714304 extend-filesystems[1488]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:43:01.718038 jq[1484]: true
Jan 13 20:43:01.718848 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:43:01.725999 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:43:01.726666 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:43:01.728439 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:43:01.728459 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:43:01.729816 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:43:01.733037 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1397)
Jan 13 20:43:01.738238 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:43:01.746851 tar[1480]: linux-amd64/helm
Jan 13 20:43:01.791245 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 20:43:01.792662 systemd-logind[1473]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 20:43:01.792694 systemd-logind[1473]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 20:43:01.793593 systemd-logind[1473]: New seat seat0.
Jan 13 20:43:01.799620 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:43:02.013332 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:43:02.229052 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 20:43:02.576722 extend-filesystems[1488]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:43:02.576722 extend-filesystems[1488]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:43:02.576722 extend-filesystems[1488]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 20:43:02.581352 extend-filesystems[1462]: Resized filesystem in /dev/vda9
Jan 13 20:43:02.586635 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:43:02.586779 containerd[1485]: time="2025-01-13T20:43:02.577553783Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:43:02.578905 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:43:02.589615 bash[1514]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:43:02.579203 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:43:02.587252 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:43:02.590650 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 20:43:02.609330 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:43:02.609446 containerd[1485]: time="2025-01-13T20:43:02.609364965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:43:02.611999 containerd[1485]: time="2025-01-13T20:43:02.611071251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:43:02.611999 containerd[1485]: time="2025-01-13T20:43:02.611317608Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:43:02.612198 containerd[1485]: time="2025-01-13T20:43:02.612084442Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:43:02.612310 containerd[1485]: time="2025-01-13T20:43:02.612287833Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:43:02.612348 containerd[1485]: time="2025-01-13T20:43:02.612327102Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:43:02.612430 containerd[1485]: time="2025-01-13T20:43:02.612411739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:43:02.612469 containerd[1485]: time="2025-01-13T20:43:02.612432071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:43:02.613480 containerd[1485]: time="2025-01-13T20:43:02.612653006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:43:02.613480 containerd[1485]: time="2025-01-13T20:43:02.612683134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:43:02.613480 containerd[1485]: time="2025-01-13T20:43:02.612704263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:43:02.613480 containerd[1485]: time="2025-01-13T20:43:02.612717221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:43:02.613480 containerd[1485]: time="2025-01-13T20:43:02.612813786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:43:02.613480 containerd[1485]: time="2025-01-13T20:43:02.613145840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:43:02.613480 containerd[1485]: time="2025-01-13T20:43:02.613276149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:43:02.613480 containerd[1485]: time="2025-01-13T20:43:02.613291955Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:43:02.613480 containerd[1485]: time="2025-01-13T20:43:02.613390429Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:43:02.613480 containerd[1485]: time="2025-01-13T20:43:02.613447090Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:43:02.618455 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:43:02.624502 containerd[1485]: time="2025-01-13T20:43:02.624462276Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:43:02.624563 containerd[1485]: time="2025-01-13T20:43:02.624527229Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:43:02.624563 containerd[1485]: time="2025-01-13T20:43:02.624550064Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:43:02.624614 containerd[1485]: time="2025-01-13T20:43:02.624569810Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:43:02.624614 containerd[1485]: time="2025-01-13T20:43:02.624589050Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:43:02.624898 containerd[1485]: time="2025-01-13T20:43:02.624768395Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:43:02.625061 containerd[1485]: time="2025-01-13T20:43:02.625036405Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:43:02.625209 containerd[1485]: time="2025-01-13T20:43:02.625188014Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:43:02.625251 containerd[1485]: time="2025-01-13T20:43:02.625213123Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:43:02.625251 containerd[1485]: time="2025-01-13T20:43:02.625231293Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:43:02.625303 containerd[1485]: time="2025-01-13T20:43:02.625248472Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:43:02.625303 containerd[1485]: time="2025-01-13T20:43:02.625264460Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:43:02.625303 containerd[1485]: time="2025-01-13T20:43:02.625290418Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:43:02.625392 containerd[1485]: time="2025-01-13T20:43:02.625309951Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:43:02.625392 containerd[1485]: time="2025-01-13T20:43:02.625333484Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 13 20:43:02.625392 containerd[1485]: time="2025-01-13T20:43:02.625350724Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:43:02.625392 containerd[1485]: time="2025-01-13T20:43:02.625372075Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:43:02.625392 containerd[1485]: time="2025-01-13T20:43:02.625388265Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:43:02.625516 containerd[1485]: time="2025-01-13T20:43:02.625413697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625516 containerd[1485]: time="2025-01-13T20:43:02.625431897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625516 containerd[1485]: time="2025-01-13T20:43:02.625447784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625516 containerd[1485]: time="2025-01-13T20:43:02.625463267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625516 containerd[1485]: time="2025-01-13T20:43:02.625479648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625516 containerd[1485]: time="2025-01-13T20:43:02.625504393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625697 containerd[1485]: time="2025-01-13T20:43:02.625519755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625697 containerd[1485]: time="2025-01-13T20:43:02.625535734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625697 containerd[1485]: time="2025-01-13T20:43:02.625551681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625697 containerd[1485]: time="2025-01-13T20:43:02.625570548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625697 containerd[1485]: time="2025-01-13T20:43:02.625584587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625697 containerd[1485]: time="2025-01-13T20:43:02.625599989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625697 containerd[1485]: time="2025-01-13T20:43:02.625614442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625697 containerd[1485]: time="2025-01-13T20:43:02.625633359Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:43:02.625697 containerd[1485]: time="2025-01-13T20:43:02.625658236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625697 containerd[1485]: time="2025-01-13T20:43:02.625683061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 13 20:43:02.625697 containerd[1485]: time="2025-01-13T20:43:02.625697847Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:43:02.625963 containerd[1485]: time="2025-01-13T20:43:02.625758497Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:43:02.625963 containerd[1485]: time="2025-01-13T20:43:02.625781404Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:43:02.625963 containerd[1485]: time="2025-01-13T20:43:02.625795826Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:43:02.625963 containerd[1485]: time="2025-01-13T20:43:02.625812693Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:43:02.625963 containerd[1485]: time="2025-01-13T20:43:02.625843942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.625963 containerd[1485]: time="2025-01-13T20:43:02.625860758Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:43:02.625963 containerd[1485]: time="2025-01-13T20:43:02.625874474Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:43:02.625963 containerd[1485]: time="2025-01-13T20:43:02.625888785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:43:02.626362 containerd[1485]: time="2025-01-13T20:43:02.626298094Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:43:02.627523 containerd[1485]: time="2025-01-13T20:43:02.626552520Z" level=info msg="Connect containerd service" Jan 13 20:43:02.627523 containerd[1485]: time="2025-01-13T20:43:02.626599172Z" level=info msg="using legacy CRI server" Jan 13 20:43:02.627523 containerd[1485]: time="2025-01-13T20:43:02.626608767Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:43:02.627523 containerd[1485]: time="2025-01-13T20:43:02.626733803Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:43:02.627523 containerd[1485]: time="2025-01-13T20:43:02.627433312Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:43:02.627727 containerd[1485]: time="2025-01-13T20:43:02.627565308Z" level=info msg="Start subscribing containerd event" Jan 13 20:43:02.627727 containerd[1485]: time="2025-01-13T20:43:02.627613736Z" level=info msg="Start recovering state" Jan 13 20:43:02.627727 containerd[1485]: time="2025-01-13T20:43:02.627678174Z" level=info msg="Start event monitor" Jan 13 20:43:02.627727 containerd[1485]: time="2025-01-13T20:43:02.627699808Z" level=info msg="Start snapshots syncer" Jan 13 20:43:02.627727 containerd[1485]: time="2025-01-13T20:43:02.627711180Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:43:02.627727 containerd[1485]: time="2025-01-13T20:43:02.627720270Z" level=info msg="Start streaming server" Jan 13 20:43:02.627839 containerd[1485]: time="2025-01-13T20:43:02.627812270Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:43:02.627912 containerd[1485]: time="2025-01-13T20:43:02.627876556Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:43:02.628015 containerd[1485]: time="2025-01-13T20:43:02.627964274Z" level=info msg="containerd successfully booted in 0.201544s" Jan 13 20:43:02.628106 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:43:02.629802 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:43:02.630050 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:43:02.637410 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:43:02.650624 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:43:02.667183 tar[1480]: linux-amd64/LICENSE Jan 13 20:43:02.667286 tar[1480]: linux-amd64/README.md Jan 13 20:43:02.669712 systemd[1]: Started getty@tty1.service - Getty on tty1. 
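
containerd reports serving on /run/containerd/containerd.sock just before systemd marks the unit started. A minimal liveness probe against that socket might look like the following sketch (stdlib only; the socket path is taken from the log lines above, everything else is illustrative):

```python
import socket

# Path logged by containerd: "serving... address=/run/containerd/containerd.sock"
SOCK = "/run/containerd/containerd.sock"

def containerd_socket_accepts(path: str = SOCK, timeout: float = 1.0) -> bool:
    """Return True if something is accepting connections on the unix socket."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)
        return True
    except OSError:
        return False
    finally:
        s.close()

if __name__ == "__main__":
    print("containerd up:", containerd_socket_accepts())
```
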
Jan 13 20:43:02.672616 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:43:02.673971 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:43:02.689066 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:43:03.195140 systemd-networkd[1423]: eth0: Gained IPv6LL Jan 13 20:43:03.198572 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:43:03.200473 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:43:03.217422 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:43:03.220418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:43:03.223197 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:43:03.244385 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:43:03.244699 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:43:03.246754 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:43:03.249648 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:43:03.908617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:43:03.910707 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:43:03.913396 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:43:03.915104 systemd[1]: Startup finished in 789ms (kernel) + 6.338s (initrd) + 4.917s (userspace) = 12.045s. Jan 13 20:43:03.923667 agetty[1546]: failed to open credentials directory Jan 13 20:43:03.933786 agetty[1545]: failed to open credentials directory Jan 13 20:43:04.406128 kubelet[1572]: E0113 20:43:04.405938 1572 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:43:04.410740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:43:04.410983 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:43:04.411365 systemd[1]: kubelet.service: Consumed 1.043s CPU time. Jan 13 20:43:11.468681 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:43:11.469935 systemd[1]: Started sshd@0-10.0.0.148:22-10.0.0.1:37180.service - OpenSSH per-connection server daemon (10.0.0.1:37180). Jan 13 20:43:11.522479 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 37180 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:43:11.524339 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:43:11.533662 systemd-logind[1473]: New session 1 of user core. Jan 13 20:43:11.535001 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:43:11.551246 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:43:11.562465 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:43:11.565298 systemd[1]: Starting user@500.service - User Manager for UID 500... 
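
systemd's summary line above splits the 12.045s boot into kernel, initrd and userspace phases. Re-adding the printed terms gives 12.044s; each term is rounded to the millisecond independently of the total, so a 1 ms discrepancy is expected (illustrative recomputation):

```python
# Phase durations exactly as printed in the "Startup finished" line.
kernel, initrd, userspace = 0.789, 6.338, 4.917
total = kernel + initrd + userspace
print(f"{total:.3f}s")
# 12.044s vs. the logged 12.045s: the printed terms are independently
# rounded, so the sum of the displayed values can differ by ~1 ms.
```
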
Jan 13 20:43:11.573751 (systemd)[1590]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:43:11.690785 systemd[1590]: Queued start job for default target default.target. Jan 13 20:43:11.699303 systemd[1590]: Created slice app.slice - User Application Slice. Jan 13 20:43:11.699329 systemd[1590]: Reached target paths.target - Paths. Jan 13 20:43:11.699344 systemd[1590]: Reached target timers.target - Timers. Jan 13 20:43:11.701001 systemd[1590]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:43:11.713556 systemd[1590]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:43:11.713687 systemd[1590]: Reached target sockets.target - Sockets. Jan 13 20:43:11.713702 systemd[1590]: Reached target basic.target - Basic System. Jan 13 20:43:11.713740 systemd[1590]: Reached target default.target - Main User Target. Jan 13 20:43:11.713772 systemd[1590]: Startup finished in 132ms. Jan 13 20:43:11.714327 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:43:11.716068 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:43:11.776519 systemd[1]: Started sshd@1-10.0.0.148:22-10.0.0.1:37182.service - OpenSSH per-connection server daemon (10.0.0.1:37182). Jan 13 20:43:11.820926 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 37182 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:43:11.822575 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:43:11.826368 systemd-logind[1473]: New session 2 of user core. Jan 13 20:43:11.836158 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:43:11.889944 sshd[1603]: Connection closed by 10.0.0.1 port 37182 Jan 13 20:43:11.890295 sshd-session[1601]: pam_unix(sshd:session): session closed for user core Jan 13 20:43:11.901136 systemd[1]: sshd@1-10.0.0.148:22-10.0.0.1:37182.service: Deactivated successfully. Jan 13 20:43:11.902972 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:43:11.904396 systemd-logind[1473]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:43:11.918339 systemd[1]: Started sshd@2-10.0.0.148:22-10.0.0.1:37196.service - OpenSSH per-connection server daemon (10.0.0.1:37196). Jan 13 20:43:11.919358 systemd-logind[1473]: Removed session 2. Jan 13 20:43:11.954082 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 37196 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:43:11.955435 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:43:11.959300 systemd-logind[1473]: New session 3 of user core. Jan 13 20:43:11.970144 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:43:12.019064 sshd[1610]: Connection closed by 10.0.0.1 port 37196 Jan 13 20:43:12.019381 sshd-session[1608]: pam_unix(sshd:session): session closed for user core Jan 13 20:43:12.033893 systemd[1]: sshd@2-10.0.0.148:22-10.0.0.1:37196.service: Deactivated successfully. Jan 13 20:43:12.035932 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:43:12.037634 systemd-logind[1473]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:43:12.049272 systemd[1]: Started sshd@3-10.0.0.148:22-10.0.0.1:37202.service - OpenSSH per-connection server daemon (10.0.0.1:37202). Jan 13 20:43:12.050278 systemd-logind[1473]: Removed session 3. 
Jan 13 20:43:12.085516 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 37202 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:43:12.086933 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:43:12.090977 systemd-logind[1473]: New session 4 of user core. Jan 13 20:43:12.102137 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:43:12.155712 sshd[1617]: Connection closed by 10.0.0.1 port 37202 Jan 13 20:43:12.156121 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Jan 13 20:43:12.169142 systemd[1]: sshd@3-10.0.0.148:22-10.0.0.1:37202.service: Deactivated successfully. Jan 13 20:43:12.170973 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:43:12.172749 systemd-logind[1473]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:43:12.180307 systemd[1]: Started sshd@4-10.0.0.148:22-10.0.0.1:37214.service - OpenSSH per-connection server daemon (10.0.0.1:37214). Jan 13 20:43:12.181264 systemd-logind[1473]: Removed session 4. Jan 13 20:43:12.215569 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 37214 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:43:12.217261 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:43:12.221381 systemd-logind[1473]: New session 5 of user core. Jan 13 20:43:12.231143 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:43:12.289531 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:43:12.289876 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:43:12.315400 sudo[1625]: pam_unix(sudo:session): session closed for user root Jan 13 20:43:12.317060 sshd[1624]: Connection closed by 10.0.0.1 port 37214 Jan 13 20:43:12.317481 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Jan 13 20:43:12.330785 systemd[1]: sshd@4-10.0.0.148:22-10.0.0.1:37214.service: Deactivated successfully. Jan 13 20:43:12.332346 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:43:12.334239 systemd-logind[1473]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:43:12.335261 systemd[1]: Started sshd@5-10.0.0.148:22-10.0.0.1:37230.service - OpenSSH per-connection server daemon (10.0.0.1:37230). Jan 13 20:43:12.335973 systemd-logind[1473]: Removed session 5. Jan 13 20:43:12.376070 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 37230 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:43:12.377672 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:43:12.381722 systemd-logind[1473]: New session 6 of user core. Jan 13 20:43:12.395198 systemd[1]: Started session-6.scope - Session 6 of User core. 
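
The sshd entries above follow a fixed "Accepted publickey for USER from HOST port PORT ssh2: RSA FINGERPRINT" shape. A small parser for that shape (the regex is mine; the sample line is copied verbatim from the log):

```python
import re

# Shape of the "Accepted publickey" lines in this journal.
ACCEPT_RE = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<host>\S+) "
    r"port (?P<port>\d+) ssh2: RSA (?P<fingerprint>\S+)"
)

sample = ("Accepted publickey for core from 10.0.0.1 port 37202 ssh2: "
          "RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8")

m = ACCEPT_RE.search(sample)
assert m is not None
print(m.group("user"), m.group("host"), m.group("port"))
# core 10.0.0.1 37202
```
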
Jan 13 20:43:12.450483 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:43:12.450904 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:43:12.454962 sudo[1634]: pam_unix(sudo:session): session closed for user root Jan 13 20:43:12.462930 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:43:12.463411 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:43:12.483305 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:43:12.517715 augenrules[1656]: No rules Jan 13 20:43:12.519866 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:43:12.520191 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:43:12.521506 sudo[1633]: pam_unix(sudo:session): session closed for user root Jan 13 20:43:12.522989 sshd[1632]: Connection closed by 10.0.0.1 port 37230 Jan 13 20:43:12.523326 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Jan 13 20:43:12.536947 systemd[1]: sshd@5-10.0.0.148:22-10.0.0.1:37230.service: Deactivated successfully. Jan 13 20:43:12.538873 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:43:12.540630 systemd-logind[1473]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:43:12.548249 systemd[1]: Started sshd@6-10.0.0.148:22-10.0.0.1:37234.service - OpenSSH per-connection server daemon (10.0.0.1:37234). Jan 13 20:43:12.549094 systemd-logind[1473]: Removed session 6. Jan 13 20:43:12.586123 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 37234 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:43:12.587465 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:43:12.591288 systemd-logind[1473]: New session 7 of user core. Jan 13 20:43:12.601126 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:43:12.654276 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:43:12.654630 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:43:12.928267 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:43:12.928394 (dockerd)[1687]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:43:13.189395 dockerd[1687]: time="2025-01-13T20:43:13.189245634Z" level=info msg="Starting up" Jan 13 20:43:13.310046 dockerd[1687]: time="2025-01-13T20:43:13.309984523Z" level=info msg="Loading containers: start." Jan 13 20:43:13.492077 kernel: Initializing XFRM netlink socket Jan 13 20:43:13.575962 systemd-networkd[1423]: docker0: Link UP Jan 13 20:43:13.633654 dockerd[1687]: time="2025-01-13T20:43:13.633595560Z" level=info msg="Loading containers: done." Jan 13 20:43:13.648820 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3982452826-merged.mount: Deactivated successfully. 
Jan 13 20:43:13.651930 dockerd[1687]: time="2025-01-13T20:43:13.651886766Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:43:13.652004 dockerd[1687]: time="2025-01-13T20:43:13.651988998Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 13 20:43:13.652180 dockerd[1687]: time="2025-01-13T20:43:13.652151634Z" level=info msg="Daemon has completed initialization" Jan 13 20:43:13.691430 dockerd[1687]: time="2025-01-13T20:43:13.691360506Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:43:13.691611 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:43:14.478846 containerd[1485]: time="2025-01-13T20:43:14.478432582Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:43:14.661392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:43:14.674219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:43:14.820338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:43:14.824591 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:43:14.885618 kubelet[1901]: E0113 20:43:14.885488 1901 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:43:14.893781 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:43:14.894084 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:43:15.371083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount177652714.mount: Deactivated successfully. 
Jan 13 20:43:17.072604 containerd[1485]: time="2025-01-13T20:43:17.072548975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:17.073896 containerd[1485]: time="2025-01-13T20:43:17.073869801Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 20:43:17.075307 containerd[1485]: time="2025-01-13T20:43:17.075266631Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:17.079517 containerd[1485]: time="2025-01-13T20:43:17.079474745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:17.080648 containerd[1485]: time="2025-01-13T20:43:17.080617733Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.602127976s" Jan 13 20:43:17.080685 containerd[1485]: time="2025-01-13T20:43:17.080653559Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 20:43:17.100375 containerd[1485]: time="2025-01-13T20:43:17.100341356Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:43:18.889856 containerd[1485]: time="2025-01-13T20:43:18.889777863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:18.890715 containerd[1485]: time="2025-01-13T20:43:18.890661071Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Jan 13 20:43:18.892126 containerd[1485]: time="2025-01-13T20:43:18.892087214Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:18.895209 containerd[1485]: time="2025-01-13T20:43:18.895158640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:18.896343 containerd[1485]: time="2025-01-13T20:43:18.896293590Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.795898387s" Jan 13 20:43:18.896343 containerd[1485]: time="2025-01-13T20:43:18.896338286Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 
20:43:18.919136 containerd[1485]: time="2025-01-13T20:43:18.919097794Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:43:19.846496 containerd[1485]: time="2025-01-13T20:43:19.846430598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:19.847493 containerd[1485]: time="2025-01-13T20:43:19.847409108Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Jan 13 20:43:19.848574 containerd[1485]: time="2025-01-13T20:43:19.848527679Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:19.851878 containerd[1485]: time="2025-01-13T20:43:19.851806241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:19.854292 containerd[1485]: time="2025-01-13T20:43:19.853488829Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 934.202855ms" Jan 13 20:43:19.854292 containerd[1485]: time="2025-01-13T20:43:19.853891881Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 20:43:19.880227 containerd[1485]: time="2025-01-13T20:43:19.880174423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:43:20.995356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475121396.mount: Deactivated successfully. 
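
Each PullImage round-trip above logs both bytes read and wall time, so effective pull throughput can be read straight off. For the kube-scheduler pull (17332822 bytes in 934.202855 ms, both from the lines above):

```python
# Numbers from the kube-scheduler pull logged above.
bytes_read = 17_332_822        # "active requests=0, bytes read=17332822"
seconds = 0.934202855          # "in 934.202855ms"
print(f"{bytes_read / seconds / 1e6:.1f} MB/s")  # ~18.6 MB/s
```
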
Jan 13 20:43:21.571482 containerd[1485]: time="2025-01-13T20:43:21.571417075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:21.572707 containerd[1485]: time="2025-01-13T20:43:21.572617452Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 20:43:21.573943 containerd[1485]: time="2025-01-13T20:43:21.573903114Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:21.576243 containerd[1485]: time="2025-01-13T20:43:21.576197151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:21.577004 containerd[1485]: time="2025-01-13T20:43:21.576961342Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.696743022s" Jan 13 20:43:21.577004 containerd[1485]: time="2025-01-13T20:43:21.576994024Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 20:43:21.600047 containerd[1485]: time="2025-01-13T20:43:21.599993247Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:43:22.119541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2535527038.mount: Deactivated successfully. 
Jan 13 20:43:22.923331 containerd[1485]: time="2025-01-13T20:43:22.923251293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:22.925247 containerd[1485]: time="2025-01-13T20:43:22.925197118Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 20:43:22.926247 containerd[1485]: time="2025-01-13T20:43:22.926208941Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:22.929460 containerd[1485]: time="2025-01-13T20:43:22.929393903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:22.930567 containerd[1485]: time="2025-01-13T20:43:22.930524616Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.33048036s" Jan 13 20:43:22.930567 containerd[1485]: time="2025-01-13T20:43:22.930561946Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:43:22.954360 containerd[1485]: time="2025-01-13T20:43:22.954284926Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:43:23.445431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367202229.mount: Deactivated successfully. 
Jan 13 20:43:23.452304 containerd[1485]: time="2025-01-13T20:43:23.452251419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:23.453160 containerd[1485]: time="2025-01-13T20:43:23.453118358Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 20:43:23.454350 containerd[1485]: time="2025-01-13T20:43:23.454311538Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:23.456796 containerd[1485]: time="2025-01-13T20:43:23.456731309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:23.457756 containerd[1485]: time="2025-01-13T20:43:23.457724658Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 503.385429ms" Jan 13 20:43:23.457803 containerd[1485]: time="2025-01-13T20:43:23.457757996Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 20:43:23.481242 containerd[1485]: time="2025-01-13T20:43:23.481198331Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:43:23.990744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657803569.mount: Deactivated successfully. Jan 13 20:43:25.144315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:43:25.158374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:43:25.301824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:43:25.307444 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:43:25.365573 kubelet[2117]: E0113 20:43:25.365433 2117 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:43:25.369754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:43:25.369931 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
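
kubelet keeps exiting because /var/lib/kubelet/config.yaml does not exist yet, and systemd keeps scheduling restart jobs (counters 1 and 2 above). The gap between each failure and the next scheduled restart is consistent with a ~10 s RestartSec, though the unit's actual Restart settings are not shown in this log; checking from the journal timestamps:

```python
from datetime import datetime

fmt = "%H:%M:%S.%f"
# Failure / next-scheduled-restart pairs taken from the journal above.
pairs = [
    ("20:43:04.410740", "20:43:14.661392"),  # restart counter 1
    ("20:43:14.893781", "20:43:25.144315"),  # restart counter 2
]
for failed, restarted in pairs:
    dt = datetime.strptime(restarted, fmt) - datetime.strptime(failed, fmt)
    print(f"{dt.total_seconds():.2f}s between failure and restart")
# ~10.25s each: RestartSec=10 plus scheduling latency is a plausible
# inference, not something the log states directly.
```
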
Jan 13 20:43:26.290312 containerd[1485]: time="2025-01-13T20:43:26.290217788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:26.291089 containerd[1485]: time="2025-01-13T20:43:26.291054711Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 13 20:43:26.293284 containerd[1485]: time="2025-01-13T20:43:26.293236046Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:26.296661 containerd[1485]: time="2025-01-13T20:43:26.296605689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:26.297790 containerd[1485]: time="2025-01-13T20:43:26.297746863Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.816511955s" Jan 13 20:43:26.297790 containerd[1485]: time="2025-01-13T20:43:26.297781859Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 20:43:28.259487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:43:28.271308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:43:28.291069 systemd[1]: Reloading requested from client PID 2208 ('systemctl') (unit session-7.scope)... Jan 13 20:43:28.291085 systemd[1]: Reloading... Jan 13 20:43:28.373143 zram_generator::config[2247]: No configuration found. Jan 13 20:43:28.683867 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:43:28.763938 systemd[1]: Reloading finished in 472 ms. Jan 13 20:43:28.820893 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:43:28.824621 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:43:28.824883 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:43:28.835747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:43:28.988069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:43:28.993502 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:43:29.042407 kubelet[2297]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:43:29.042407 kubelet[2297]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 13 20:43:29.042407 kubelet[2297]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:43:29.042821 kubelet[2297]: I0113 20:43:29.042486 2297 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:43:29.373265 kubelet[2297]: I0113 20:43:29.373214 2297 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:43:29.373265 kubelet[2297]: I0113 20:43:29.373251 2297 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:43:29.373505 kubelet[2297]: I0113 20:43:29.373481 2297 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:43:29.389887 kubelet[2297]: E0113 20:43:29.389832 2297 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:29.390443 kubelet[2297]: I0113 20:43:29.390399 2297 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:43:29.402449 kubelet[2297]: I0113 20:43:29.402402 2297 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:43:29.403583 kubelet[2297]: I0113 20:43:29.403548 2297 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:43:29.403805 kubelet[2297]: I0113 20:43:29.403713 2297 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:43:29.403904 kubelet[2297]: I0113 20:43:29.403816 2297 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:43:29.403904 kubelet[2297]: I0113 20:43:29.403827 2297 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 
20:43:29.406039 kubelet[2297]: I0113 20:43:29.404044 2297 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:43:29.406039 kubelet[2297]: I0113 20:43:29.404160 2297 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:43:29.406039 kubelet[2297]: I0113 20:43:29.404178 2297 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:43:29.406039 kubelet[2297]: I0113 20:43:29.404206 2297 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:43:29.406039 kubelet[2297]: I0113 20:43:29.404223 2297 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:43:29.406039 kubelet[2297]: W0113 20:43:29.405427 2297 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:29.406039 kubelet[2297]: E0113 20:43:29.405467 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:29.406039 kubelet[2297]: W0113 20:43:29.405825 2297 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.148:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:29.406039 kubelet[2297]: E0113 20:43:29.405881 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.148:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:29.406379 kubelet[2297]: I0113 20:43:29.406358 2297 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:43:29.409149 kubelet[2297]: I0113 20:43:29.409126 2297 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:43:29.410065 kubelet[2297]: W0113 20:43:29.410045 2297 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 20:43:29.411868 kubelet[2297]: I0113 20:43:29.410617 2297 server.go:1256] "Started kubelet" Jan 13 20:43:29.411868 kubelet[2297]: I0113 20:43:29.410900 2297 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:43:29.411868 kubelet[2297]: I0113 20:43:29.411211 2297 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:43:29.411868 kubelet[2297]: I0113 20:43:29.411252 2297 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:43:29.411868 kubelet[2297]: I0113 20:43:29.411723 2297 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:43:29.412002 kubelet[2297]: I0113 20:43:29.411986 2297 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:43:29.413977 kubelet[2297]: E0113 20:43:29.413313 2297 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:43:29.413977 kubelet[2297]: I0113 20:43:29.413351 2297 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:43:29.413977 kubelet[2297]: I0113 20:43:29.413435 2297 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:43:29.413977 kubelet[2297]: I0113 20:43:29.413511 2297 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:43:29.413977 kubelet[2297]: W0113 20:43:29.413801 2297 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:29.413977 kubelet[2297]: E0113 20:43:29.413836 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:29.415485 kubelet[2297]: I0113 20:43:29.415452 2297 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:43:29.415596 kubelet[2297]: I0113 20:43:29.415532 2297 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:43:29.415960 kubelet[2297]: E0113 20:43:29.415928 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="200ms" Jan 13 20:43:29.416097 kubelet[2297]: E0113 20:43:29.416079 2297 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:43:29.417032 kubelet[2297]: E0113 20:43:29.416980 2297 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5b50eb755a29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:43:29.410595369 +0000 UTC m=+0.412902347,LastTimestamp:2025-01-13 20:43:29.410595369 +0000 UTC m=+0.412902347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:43:29.417444 kubelet[2297]: I0113 20:43:29.417429 2297 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:43:29.428481 kubelet[2297]: I0113 20:43:29.428436 2297 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:43:29.429900 kubelet[2297]: I0113 20:43:29.429875 2297 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:43:29.429936 kubelet[2297]: I0113 20:43:29.429911 2297 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:43:29.429936 kubelet[2297]: I0113 20:43:29.429932 2297 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:43:29.430000 kubelet[2297]: E0113 20:43:29.429979 2297 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:43:29.433247 kubelet[2297]: I0113 20:43:29.433213 2297 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:43:29.433792 kubelet[2297]: I0113 20:43:29.433357 2297 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:43:29.433792 kubelet[2297]: I0113 20:43:29.433379 2297 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:43:29.433792 kubelet[2297]: W0113 20:43:29.433674 2297 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:29.433792 kubelet[2297]: E0113 20:43:29.433722 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:29.515251 kubelet[2297]: I0113 20:43:29.515204 2297 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:43:29.515588 kubelet[2297]: E0113 20:43:29.515561 2297 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Jan 13 20:43:29.530838 kubelet[2297]: E0113 20:43:29.530789 2297 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:43:29.616851 kubelet[2297]: E0113 20:43:29.616802 2297 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="400ms" Jan 13 20:43:29.673668 kubelet[2297]: I0113 20:43:29.673526 2297 policy_none.go:49] "None policy: Start" Jan 13 20:43:29.674413 kubelet[2297]: I0113 20:43:29.674388 2297 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:43:29.674600 kubelet[2297]: I0113 20:43:29.674419 2297 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:43:29.681258 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:43:29.696264 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:43:29.699588 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:43:29.714821 kubelet[2297]: I0113 20:43:29.714801 2297 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:43:29.715490 kubelet[2297]: I0113 20:43:29.715073 2297 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:43:29.716004 kubelet[2297]: E0113 20:43:29.715984 2297 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:43:29.716705 kubelet[2297]: I0113 20:43:29.716663 2297 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:43:29.717058 kubelet[2297]: E0113 20:43:29.717042 2297 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Jan 13 20:43:29.731107 kubelet[2297]: I0113 20:43:29.731072 2297 topology_manager.go:215] "Topology Admit Handler" podUID="42b8c7196f83f31147af5a301fc913c4" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:43:29.733251 kubelet[2297]: I0113 20:43:29.733227 2297 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:43:29.733969 kubelet[2297]: I0113 20:43:29.733953 2297 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:43:29.741331 systemd[1]: Created slice kubepods-burstable-pod42b8c7196f83f31147af5a301fc913c4.slice - libcontainer container kubepods-burstable-pod42b8c7196f83f31147af5a301fc913c4.slice. Jan 13 20:43:29.753512 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Jan 13 20:43:29.757378 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
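Note the lease-creation retries: interval="200ms" above, then "400ms" here, and "800ms" and "1.6s" further down — the controller doubles its retry interval after each failed attempt. A sketch of that doubling schedule; the 200ms starting value is taken from this log, but the cap used below is an assumption, not something this log shows.

def lease_retry_intervals(start_ms: int = 200, cap_ms: int = 7000, attempts: int = 6):
    """Yield a doubling retry schedule like the one visible in the kubelet log
    (200ms -> 400ms -> 800ms -> 1.6s -> ...). The 7s cap is an assumption."""
    interval = start_ms
    for _ in range(attempts):
        yield interval
        interval = min(interval * 2, cap_ms)

print([f"{ms / 1000:g}s" for ms in lease_retry_intervals()])
# ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s']
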
Jan 13 20:43:29.815246 kubelet[2297]: I0113 20:43:29.815176 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b8c7196f83f31147af5a301fc913c4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"42b8c7196f83f31147af5a301fc913c4\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:43:29.815246 kubelet[2297]: I0113 20:43:29.815245 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:43:29.815437 kubelet[2297]: I0113 20:43:29.815280 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:43:29.815437 kubelet[2297]: I0113 20:43:29.815309 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:43:29.815437 kubelet[2297]: I0113 20:43:29.815335 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:43:29.815437 kubelet[2297]: I0113 20:43:29.815383 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:43:29.815437 kubelet[2297]: I0113 20:43:29.815430 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b8c7196f83f31147af5a301fc913c4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"42b8c7196f83f31147af5a301fc913c4\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:43:29.815568 kubelet[2297]: I0113 20:43:29.815482 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b8c7196f83f31147af5a301fc913c4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"42b8c7196f83f31147af5a301fc913c4\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:43:29.815568 kubelet[2297]: I0113 20:43:29.815550 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 13 20:43:30.017900 kubelet[2297]: E0113 20:43:30.017759 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="800ms" Jan 13 20:43:30.051149 kubelet[2297]: E0113 20:43:30.051089 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:30.051872 containerd[1485]: time="2025-01-13T20:43:30.051834067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:42b8c7196f83f31147af5a301fc913c4,Namespace:kube-system,Attempt:0,}" Jan 13 20:43:30.056131 kubelet[2297]: E0113 20:43:30.056092 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:30.056615 containerd[1485]: time="2025-01-13T20:43:30.056567311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Jan 13 20:43:30.059803 kubelet[2297]: E0113 20:43:30.059749 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:30.060183 containerd[1485]: time="2025-01-13T20:43:30.060147111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Jan 13 20:43:30.118846 kubelet[2297]: I0113 20:43:30.118807 2297 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:43:30.119292 kubelet[2297]: E0113 20:43:30.119259 2297 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Jan 13 20:43:30.228375 kubelet[2297]: W0113 20:43:30.228284 2297 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:30.228375 kubelet[2297]: E0113 20:43:30.228360 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:30.587071 kubelet[2297]: W0113 20:43:30.586988 2297 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:30.587071 kubelet[2297]: E0113 20:43:30.587063 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:30.604360 
kubelet[2297]: W0113 20:43:30.604297 2297 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.148:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:30.604360 kubelet[2297]: E0113 20:43:30.604344 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.148:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:30.638713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3852644083.mount: Deactivated successfully. Jan 13 20:43:30.647544 containerd[1485]: time="2025-01-13T20:43:30.647507528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:43:30.650090 containerd[1485]: time="2025-01-13T20:43:30.650049603Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:43:30.650997 containerd[1485]: time="2025-01-13T20:43:30.650970777Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:43:30.652919 containerd[1485]: time="2025-01-13T20:43:30.652885716Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:43:30.653669 containerd[1485]: time="2025-01-13T20:43:30.653610674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:43:30.654707 containerd[1485]: time="2025-01-13T20:43:30.654654131Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:43:30.655599 containerd[1485]: time="2025-01-13T20:43:30.655569874Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:43:30.656628 containerd[1485]: time="2025-01-13T20:43:30.656595024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:43:30.657412 containerd[1485]: time="2025-01-13T20:43:30.657379265Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 600.700963ms" Jan 13 20:43:30.660162 containerd[1485]: time="2025-01-13T20:43:30.660136684Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 599.896781ms" Jan 13 20:43:30.662175 
containerd[1485]: time="2025-01-13T20:43:30.662146058Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 610.215312ms" Jan 13 20:43:30.776824 containerd[1485]: time="2025-01-13T20:43:30.776570858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:43:30.776824 containerd[1485]: time="2025-01-13T20:43:30.776619820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:43:30.776824 containerd[1485]: time="2025-01-13T20:43:30.776638177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:30.776824 containerd[1485]: time="2025-01-13T20:43:30.776725117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:30.776824 containerd[1485]: time="2025-01-13T20:43:30.776557561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:43:30.776824 containerd[1485]: time="2025-01-13T20:43:30.776604548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:43:30.776824 containerd[1485]: time="2025-01-13T20:43:30.776614258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:30.776824 containerd[1485]: time="2025-01-13T20:43:30.776689523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:30.778754 containerd[1485]: time="2025-01-13T20:43:30.778383725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:43:30.778754 containerd[1485]: time="2025-01-13T20:43:30.778429089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:43:30.778754 containerd[1485]: time="2025-01-13T20:43:30.778448790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:30.778754 containerd[1485]: time="2025-01-13T20:43:30.778523144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:30.806208 systemd[1]: Started cri-containerd-0fd0f8be3042cce3439d5dc14d01654c8aec00a31e177b8188b7724a98f46c09.scope - libcontainer container 0fd0f8be3042cce3439d5dc14d01654c8aec00a31e177b8188b7724a98f46c09. Jan 13 20:43:30.807981 systemd[1]: Started cri-containerd-abf9da490e6661e03fb811317cdce7fe2eace51e095da2a66c623ea9c1ea7d5f.scope - libcontainer container abf9da490e6661e03fb811317cdce7fe2eace51e095da2a66c623ea9c1ea7d5f. Jan 13 20:43:30.810113 systemd[1]: Started cri-containerd-ef94fc8f67031dae1e79e861dc058fc5e575032f24c91a56ba43d653038e1b5a.scope - libcontainer container ef94fc8f67031dae1e79e861dc058fc5e575032f24c91a56ba43d653038e1b5a. 
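Every list/watch and lease call in this stretch fails with "dial tcp 10.0.0.148:6443: connect: connection refused" because the kube-apiserver static pod the kubelet is in the middle of starting is not serving yet. A quick way to reproduce that reachability check from the node; the host and port are the ones in the log, the snippet itself is just a generic TCP probe.

import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connect succeeds; connection refused or timeout -> False."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# The endpoint the kubelet keeps retrying in the entries above.
print("apiserver reachable:", port_open("10.0.0.148", 6443))
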
Jan 13 20:43:30.819706 kubelet[2297]: E0113 20:43:30.819675 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="1.6s" Jan 13 20:43:30.852508 containerd[1485]: time="2025-01-13T20:43:30.852085687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fd0f8be3042cce3439d5dc14d01654c8aec00a31e177b8188b7724a98f46c09\"" Jan 13 20:43:30.853298 containerd[1485]: time="2025-01-13T20:43:30.853090744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef94fc8f67031dae1e79e861dc058fc5e575032f24c91a56ba43d653038e1b5a\"" Jan 13 20:43:30.854418 kubelet[2297]: E0113 20:43:30.854400 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:30.855776 kubelet[2297]: E0113 20:43:30.855458 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:30.858460 containerd[1485]: time="2025-01-13T20:43:30.858267085Z" level=info msg="CreateContainer within sandbox \"ef94fc8f67031dae1e79e861dc058fc5e575032f24c91a56ba43d653038e1b5a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:43:30.859425 containerd[1485]: time="2025-01-13T20:43:30.859406740Z" level=info msg="CreateContainer within sandbox \"0fd0f8be3042cce3439d5dc14d01654c8aec00a31e177b8188b7724a98f46c09\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:43:30.860594 containerd[1485]: time="2025-01-13T20:43:30.860563662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:42b8c7196f83f31147af5a301fc913c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"abf9da490e6661e03fb811317cdce7fe2eace51e095da2a66c623ea9c1ea7d5f\"" Jan 13 20:43:30.861185 kubelet[2297]: E0113 20:43:30.861159 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:30.863054 containerd[1485]: time="2025-01-13T20:43:30.863030131Z" level=info msg="CreateContainer within sandbox \"abf9da490e6661e03fb811317cdce7fe2eace51e095da2a66c623ea9c1ea7d5f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:43:30.880845 containerd[1485]: time="2025-01-13T20:43:30.880804502Z" level=info msg="CreateContainer within sandbox \"0fd0f8be3042cce3439d5dc14d01654c8aec00a31e177b8188b7724a98f46c09\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2c0b70db0bbcadf904f8a97aae71bb868383ee19a5439d8900ddb30fcb235546\"" Jan 13 20:43:30.881524 containerd[1485]: time="2025-01-13T20:43:30.881455237Z" level=info msg="StartContainer for \"2c0b70db0bbcadf904f8a97aae71bb868383ee19a5439d8900ddb30fcb235546\"" Jan 13 20:43:30.891495 containerd[1485]: time="2025-01-13T20:43:30.891448906Z" level=info msg="CreateContainer within sandbox \"abf9da490e6661e03fb811317cdce7fe2eace51e095da2a66c623ea9c1ea7d5f\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"389d33ac22b32cd334db2042b01787ee7dbec7dca1354dca606b396ab481dfdc\"" Jan 13 20:43:30.891896 containerd[1485]: time="2025-01-13T20:43:30.891868303Z" level=info msg="StartContainer for \"389d33ac22b32cd334db2042b01787ee7dbec7dca1354dca606b396ab481dfdc\"" Jan 13 20:43:30.893208 containerd[1485]: time="2025-01-13T20:43:30.893184293Z" level=info msg="CreateContainer within sandbox \"ef94fc8f67031dae1e79e861dc058fc5e575032f24c91a56ba43d653038e1b5a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b6a3df5d0237f58da8ce71a8cd5c60f84d97d53218d8adba7cb6a3ae502a12cc\"" Jan 13 20:43:30.893573 containerd[1485]: time="2025-01-13T20:43:30.893549287Z" level=info msg="StartContainer for \"b6a3df5d0237f58da8ce71a8cd5c60f84d97d53218d8adba7cb6a3ae502a12cc\"" Jan 13 20:43:30.912327 systemd[1]: Started cri-containerd-2c0b70db0bbcadf904f8a97aae71bb868383ee19a5439d8900ddb30fcb235546.scope - libcontainer container 2c0b70db0bbcadf904f8a97aae71bb868383ee19a5439d8900ddb30fcb235546. Jan 13 20:43:30.921294 kubelet[2297]: I0113 20:43:30.921252 2297 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:43:30.921535 kubelet[2297]: E0113 20:43:30.921520 2297 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Jan 13 20:43:30.922975 kubelet[2297]: W0113 20:43:30.922932 2297 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:30.923065 kubelet[2297]: E0113 20:43:30.922986 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jan 13 20:43:30.924318 systemd[1]: Started cri-containerd-389d33ac22b32cd334db2042b01787ee7dbec7dca1354dca606b396ab481dfdc.scope - libcontainer container 389d33ac22b32cd334db2042b01787ee7dbec7dca1354dca606b396ab481dfdc. Jan 13 20:43:30.928797 systemd[1]: Started cri-containerd-b6a3df5d0237f58da8ce71a8cd5c60f84d97d53218d8adba7cb6a3ae502a12cc.scope - libcontainer container b6a3df5d0237f58da8ce71a8cd5c60f84d97d53218d8adba7cb6a3ae502a12cc. 
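The containerd entries trace the CRI flow for each static pod: RunPodSandbox returns a 64-hex sandbox id, CreateContainer names a container inside that sandbox and returns a container id, StartContainer runs it, and systemd wraps each in a cri-containerd-<id>.scope unit. A small sketch that pairs container ids with their sandboxes from a saved journal; the regexes track the exact msg strings seen above and are not a containerd interface.

import re

SANDBOX_RE = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<pod>[^,]+),'
    r'.*returns sandbox id \\?"(?P<sid>[0-9a-f]{64})')
CONTAINER_RE = re.compile(
    r'CreateContainer within sandbox \\?"(?P<sid>[0-9a-f]{64})\\?"'
    r'.*returns container id \\?"(?P<cid>[0-9a-f]{64})')

def correlate(journal_text: str):
    """Map sandbox id -> (pod name, [container ids]) from containerd log lines."""
    pods = {m['sid']: (m['pod'], []) for m in SANDBOX_RE.finditer(journal_text)}
    for m in CONTAINER_RE.finditer(journal_text):
        if m['sid'] in pods:
            pods[m['sid']][1].append(m['cid'])
    return pods

Run over this section, correlate() would map sandbox 0fd0f8be... (kube-scheduler-localhost) to container 2c0b70db..., abf9da49... (kube-apiserver-localhost) to 389d33ac..., and ef94fc8f... (kube-controller-manager-localhost) to b6a3df5d....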
Jan 13 20:43:30.972879 containerd[1485]: time="2025-01-13T20:43:30.971945241Z" level=info msg="StartContainer for \"2c0b70db0bbcadf904f8a97aae71bb868383ee19a5439d8900ddb30fcb235546\" returns successfully" Jan 13 20:43:30.972879 containerd[1485]: time="2025-01-13T20:43:30.972125033Z" level=info msg="StartContainer for \"389d33ac22b32cd334db2042b01787ee7dbec7dca1354dca606b396ab481dfdc\" returns successfully" Jan 13 20:43:30.978387 containerd[1485]: time="2025-01-13T20:43:30.978330420Z" level=info msg="StartContainer for \"b6a3df5d0237f58da8ce71a8cd5c60f84d97d53218d8adba7cb6a3ae502a12cc\" returns successfully" Jan 13 20:43:31.442576 kubelet[2297]: E0113 20:43:31.441688 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:31.444468 kubelet[2297]: E0113 20:43:31.444445 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:31.446229 kubelet[2297]: E0113 20:43:31.446208 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:32.399432 kubelet[2297]: E0113 20:43:32.399395 2297 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 13 20:43:32.422611 kubelet[2297]: E0113 20:43:32.422572 2297 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 20:43:32.448398 kubelet[2297]: E0113 20:43:32.448368 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:32.525400 kubelet[2297]: I0113 20:43:32.525360 2297 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:43:32.531626 kubelet[2297]: I0113 20:43:32.531582 2297 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:43:32.537504 kubelet[2297]: E0113 20:43:32.537477 2297 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:43:32.637989 kubelet[2297]: E0113 20:43:32.637929 2297 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:43:32.738780 kubelet[2297]: E0113 20:43:32.738641 2297 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:43:32.839368 kubelet[2297]: E0113 20:43:32.839332 2297 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:43:32.884521 kubelet[2297]: E0113 20:43:32.884494 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:32.939996 kubelet[2297]: E0113 20:43:32.939942 2297 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:43:33.040857 kubelet[2297]: E0113 20:43:33.040577 2297 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 
20:43:33.141334 kubelet[2297]: E0113 20:43:33.141279 2297 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:43:33.407330 kubelet[2297]: I0113 20:43:33.407271 2297 apiserver.go:52] "Watching apiserver" Jan 13 20:43:33.414476 kubelet[2297]: I0113 20:43:33.414419 2297 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:43:34.592756 systemd[1]: Reloading requested from client PID 2579 ('systemctl') (unit session-7.scope)... Jan 13 20:43:34.592773 systemd[1]: Reloading... Jan 13 20:43:34.666894 zram_generator::config[2618]: No configuration found. Jan 13 20:43:34.784812 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:43:34.878720 systemd[1]: Reloading finished in 285 ms. Jan 13 20:43:34.925145 kubelet[2297]: I0113 20:43:34.925003 2297 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:43:34.925188 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:43:34.948524 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:43:34.948798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:43:34.954348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:43:35.104444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:43:35.109501 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:43:35.163370 kubelet[2663]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:43:35.163370 kubelet[2663]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:43:35.163370 kubelet[2663]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:43:35.163370 kubelet[2663]: I0113 20:43:35.162878 2663 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:43:35.168126 kubelet[2663]: I0113 20:43:35.168082 2663 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:43:35.168126 kubelet[2663]: I0113 20:43:35.168124 2663 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:43:35.168440 kubelet[2663]: I0113 20:43:35.168416 2663 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:43:35.170303 kubelet[2663]: I0113 20:43:35.170274 2663 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
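After the systemd reload restarts it, the new kubelet (PID 2663) reports "Client rotation is on" and loads its pair from /var/lib/kubelet/pki/kubelet-client-current.pem. A sketch for inspecting that certificate's validity window on the node, assuming the third-party `cryptography` package is available; the path is the one from the log.

from cryptography import x509

PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"  # path from the log above

def kubelet_cert_window(path: str = PEM):
    """Print the validity window of the kubelet client certificate.
    The file holds key + cert concatenated, so isolate the CERTIFICATE block first."""
    blob = open(path, "rb").read()
    start = blob.index(b"-----BEGIN CERTIFICATE-----")
    cert = x509.load_pem_x509_certificate(blob[start:])
    print("not before:", cert.not_valid_before)
    print("not after: ", cert.not_valid_after)
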
Jan 13 20:43:35.172887 kubelet[2663]: I0113 20:43:35.172849 2663 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:43:35.181492 kubelet[2663]: I0113 20:43:35.181461 2663 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:43:35.184266 kubelet[2663]: I0113 20:43:35.184244 2663 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:43:35.185105 kubelet[2663]: I0113 20:43:35.184812 2663 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:43:35.185105 kubelet[2663]: I0113 20:43:35.185082 2663 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:43:35.185396 kubelet[2663]: I0113 20:43:35.185382 2663 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:43:35.185441 kubelet[2663]: I0113 20:43:35.185429 2663 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:43:35.185553 kubelet[2663]: I0113 20:43:35.185536 2663 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:43:35.185553 kubelet[2663]: I0113 20:43:35.185554 2663 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:43:35.185615 kubelet[2663]: I0113 20:43:35.185580 2663 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:43:35.185615 kubelet[2663]: I0113 20:43:35.185603 2663 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:43:35.186952 kubelet[2663]: I0113 20:43:35.186810 2663 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:43:35.187043 kubelet[2663]: I0113 20:43:35.186989 2663 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:43:35.190044 kubelet[2663]: I0113 20:43:35.187846 2663 server.go:1256] "Started kubelet" Jan 13 20:43:35.190044 kubelet[2663]: I0113 20:43:35.188106 2663 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:43:35.190044 
kubelet[2663]: I0113 20:43:35.188300 2663 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:43:35.190044 kubelet[2663]: I0113 20:43:35.189373 2663 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:43:35.190376 kubelet[2663]: I0113 20:43:35.190360 2663 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:43:35.195248 kubelet[2663]: I0113 20:43:35.195233 2663 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:43:35.200625 kubelet[2663]: E0113 20:43:35.200548 2663 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:43:35.200625 kubelet[2663]: I0113 20:43:35.200596 2663 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:43:35.200762 kubelet[2663]: I0113 20:43:35.200734 2663 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:43:35.200919 kubelet[2663]: I0113 20:43:35.200906 2663 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:43:35.202098 kubelet[2663]: E0113 20:43:35.201184 2663 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:43:35.203738 kubelet[2663]: I0113 20:43:35.202779 2663 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:43:35.203738 kubelet[2663]: I0113 20:43:35.202900 2663 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:43:35.204422 kubelet[2663]: I0113 20:43:35.204389 2663 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:43:35.211505 kubelet[2663]: I0113 20:43:35.211477 2663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:43:35.213642 kubelet[2663]: I0113 20:43:35.213484 2663 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:43:35.213642 kubelet[2663]: I0113 20:43:35.213509 2663 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:43:35.213832 kubelet[2663]: I0113 20:43:35.213785 2663 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:43:35.213868 kubelet[2663]: E0113 20:43:35.213859 2663 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:43:35.244836 kubelet[2663]: I0113 20:43:35.244799 2663 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:43:35.244836 kubelet[2663]: I0113 20:43:35.244818 2663 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:43:35.244836 kubelet[2663]: I0113 20:43:35.244833 2663 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:43:35.245081 kubelet[2663]: I0113 20:43:35.244967 2663 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:43:35.245081 kubelet[2663]: I0113 20:43:35.244989 2663 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:43:35.245081 kubelet[2663]: I0113 20:43:35.244995 2663 policy_none.go:49] "None policy: Start" Jan 13 20:43:35.245607 kubelet[2663]: I0113 20:43:35.245565 2663 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:43:35.245664 kubelet[2663]: I0113 20:43:35.245645 2663 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:43:35.245915 kubelet[2663]: I0113 20:43:35.245888 2663 state_mem.go:75] "Updated machine memory state" Jan 13 20:43:35.250868 kubelet[2663]: I0113 20:43:35.250831 2663 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:43:35.251403 kubelet[2663]: I0113 20:43:35.251229 2663 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:43:35.305636 kubelet[2663]: I0113 20:43:35.305605 2663 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:43:35.311871 kubelet[2663]: I0113 20:43:35.311840 2663 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 20:43:35.311968 kubelet[2663]: I0113 20:43:35.311914 2663 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:43:35.314182 kubelet[2663]: I0113 20:43:35.314140 2663 topology_manager.go:215] "Topology Admit Handler" podUID="42b8c7196f83f31147af5a301fc913c4" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:43:35.314263 kubelet[2663]: I0113 20:43:35.314250 2663 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:43:35.314364 kubelet[2663]: I0113 20:43:35.314348 2663 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:43:35.401451 kubelet[2663]: I0113 20:43:35.401401 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b8c7196f83f31147af5a301fc913c4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"42b8c7196f83f31147af5a301fc913c4\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:43:35.401451 kubelet[2663]: I0113 20:43:35.401443 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/42b8c7196f83f31147af5a301fc913c4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"42b8c7196f83f31147af5a301fc913c4\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:43:35.401628 kubelet[2663]: I0113 20:43:35.401480 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:43:35.401628 kubelet[2663]: I0113 20:43:35.401510 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:43:35.401628 kubelet[2663]: I0113 20:43:35.401591 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b8c7196f83f31147af5a301fc913c4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"42b8c7196f83f31147af5a301fc913c4\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:43:35.401628 kubelet[2663]: I0113 20:43:35.401624 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:43:35.401732 kubelet[2663]: I0113 20:43:35.401644 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:43:35.401732 kubelet[2663]: I0113 20:43:35.401669 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:43:35.401732 kubelet[2663]: I0113 20:43:35.401688 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:43:35.626813 kubelet[2663]: E0113 20:43:35.626767 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:35.628195 kubelet[2663]: E0113 20:43:35.627635 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:35.628195 kubelet[2663]: E0113 
20:43:35.627779 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:36.186216 kubelet[2663]: I0113 20:43:36.186171 2663 apiserver.go:52] "Watching apiserver" Jan 13 20:43:36.262992 kubelet[2663]: E0113 20:43:36.262947 2663 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 20:43:36.262992 kubelet[2663]: E0113 20:43:36.262980 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:36.263211 kubelet[2663]: E0113 20:43:36.263085 2663 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 13 20:43:36.264406 kubelet[2663]: E0113 20:43:36.263476 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:36.264406 kubelet[2663]: E0113 20:43:36.263848 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:36.287086 kubelet[2663]: I0113 20:43:36.287026 2663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.286961408 podStartE2EDuration="1.286961408s" podCreationTimestamp="2025-01-13 20:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:43:36.286783503 +0000 UTC m=+1.171979632" watchObservedRunningTime="2025-01-13 20:43:36.286961408 +0000 UTC m=+1.172157537" Jan 13 20:43:36.303041 kubelet[2663]: I0113 20:43:36.301203 2663 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:43:36.311631 kubelet[2663]: I0113 20:43:36.311585 2663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.311543304 podStartE2EDuration="1.311543304s" podCreationTimestamp="2025-01-13 20:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:43:36.301475553 +0000 UTC m=+1.186671683" watchObservedRunningTime="2025-01-13 20:43:36.311543304 +0000 UTC m=+1.196739434" Jan 13 20:43:37.227531 kubelet[2663]: E0113 20:43:37.227501 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:37.227963 kubelet[2663]: E0113 20:43:37.227613 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:37.558374 kubelet[2663]: E0113 20:43:37.558228 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:38.229392 kubelet[2663]: E0113 20:43:38.229368 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:39.351153 sudo[1667]: pam_unix(sudo:session): session closed for user root Jan 13 20:43:39.352930 sshd[1666]: Connection closed by 10.0.0.1 port 37234 Jan 13 20:43:39.353250 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Jan 13 20:43:39.357325 systemd[1]: sshd@6-10.0.0.148:22-10.0.0.1:37234.service: Deactivated successfully. Jan 13 20:43:39.359393 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:43:39.359613 systemd[1]: session-7.scope: Consumed 4.151s CPU time, 186.4M memory peak, 0B memory swap peak. Jan 13 20:43:39.360142 systemd-logind[1473]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:43:39.361157 systemd-logind[1473]: Removed session 7. Jan 13 20:43:39.731875 kubelet[2663]: E0113 20:43:39.731826 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:45.801202 kubelet[2663]: E0113 20:43:45.801171 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:45.808208 kubelet[2663]: I0113 20:43:45.808149 2663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=10.808103855 podStartE2EDuration="10.808103855s" podCreationTimestamp="2025-01-13 20:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:43:36.312204713 +0000 UTC m=+1.197400843" watchObservedRunningTime="2025-01-13 20:43:45.808103855 +0000 UTC m=+10.693299984" Jan 13 20:43:46.877142 update_engine[1474]: I20250113 20:43:46.877059 1474 update_attempter.cc:509] Updating boot flags... Jan 13 20:43:46.905784 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2761) Jan 13 20:43:46.940054 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2760) Jan 13 20:43:46.981048 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2760) Jan 13 20:43:47.562002 kubelet[2663]: E0113 20:43:47.561967 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:49.737200 kubelet[2663]: E0113 20:43:49.737088 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:50.159810 kubelet[2663]: I0113 20:43:50.159784 2663 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:43:50.160170 containerd[1485]: time="2025-01-13T20:43:50.160122139Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 20:43:50.160495 kubelet[2663]: I0113 20:43:50.160329 2663 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:43:50.245805 kubelet[2663]: E0113 20:43:50.245769 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:51.037510 kubelet[2663]: I0113 20:43:51.037472 2663 topology_manager.go:215] "Topology Admit Handler" podUID="2c273e18-2554-480e-a89f-1b9a058167b0" podNamespace="kube-system" podName="kube-proxy-6bdlt" Jan 13 20:43:51.044665 systemd[1]: Created slice kubepods-besteffort-pod2c273e18_2554_480e_a89f_1b9a058167b0.slice - libcontainer container kubepods-besteffort-pod2c273e18_2554_480e_a89f_1b9a058167b0.slice. Jan 13 20:43:51.098753 kubelet[2663]: I0113 20:43:51.098721 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c273e18-2554-480e-a89f-1b9a058167b0-lib-modules\") pod \"kube-proxy-6bdlt\" (UID: \"2c273e18-2554-480e-a89f-1b9a058167b0\") " pod="kube-system/kube-proxy-6bdlt" Jan 13 20:43:51.098753 kubelet[2663]: I0113 20:43:51.098755 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c273e18-2554-480e-a89f-1b9a058167b0-xtables-lock\") pod \"kube-proxy-6bdlt\" (UID: \"2c273e18-2554-480e-a89f-1b9a058167b0\") " pod="kube-system/kube-proxy-6bdlt" Jan 13 20:43:51.098753 kubelet[2663]: I0113 20:43:51.098773 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8g9n\" (UniqueName: \"kubernetes.io/projected/2c273e18-2554-480e-a89f-1b9a058167b0-kube-api-access-s8g9n\") pod \"kube-proxy-6bdlt\" (UID: \"2c273e18-2554-480e-a89f-1b9a058167b0\") " pod="kube-system/kube-proxy-6bdlt" Jan 13 20:43:51.098946 kubelet[2663]: I0113 20:43:51.098792 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2c273e18-2554-480e-a89f-1b9a058167b0-kube-proxy\") pod \"kube-proxy-6bdlt\" (UID: \"2c273e18-2554-480e-a89f-1b9a058167b0\") " pod="kube-system/kube-proxy-6bdlt" Jan 13 20:43:51.276587 kubelet[2663]: I0113 20:43:51.276473 2663 topology_manager.go:215] "Topology Admit Handler" podUID="75f9344f-cf21-44f9-b503-129b7802df4b" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-27m4p" Jan 13 20:43:51.282677 systemd[1]: Created slice kubepods-besteffort-pod75f9344f_cf21_44f9_b503_129b7802df4b.slice - libcontainer container kubepods-besteffort-pod75f9344f_cf21_44f9_b503_129b7802df4b.slice. 
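The kubepods slice names above follow a fixed convention: the QoS-class slice, then "pod" plus the pod UID with its dashes replaced by underscores (systemd treats "-" as a hierarchy separator in slice names), e.g. kubepods-besteffort-pod2c273e18_2554_480e_a89f_1b9a058167b0.slice for kube-proxy-6bdlt. A sketch of that mapping; it reproduces the names in this log but is not kubelet code.

def pod_slice_name(qos: str, pod_uid: str) -> str:
    """Build the systemd slice name the kubelet uses for a pod's cgroup;
    guaranteed pods sit directly under kubepods.slice (an assumption here,
    not shown in this log), burstable/besteffort under their QoS slice."""
    prefix = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{prefix}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("besteffort", "2c273e18-2554-480e-a89f-1b9a058167b0"))
# kubepods-besteffort-pod2c273e18_2554_480e_a89f_1b9a058167b0.slice
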
Jan 13 20:43:51.300500 kubelet[2663]: I0113 20:43:51.300347 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75f9344f-cf21-44f9-b503-129b7802df4b-var-lib-calico\") pod \"tigera-operator-c7ccbd65-27m4p\" (UID: \"75f9344f-cf21-44f9-b503-129b7802df4b\") " pod="tigera-operator/tigera-operator-c7ccbd65-27m4p" Jan 13 20:43:51.300500 kubelet[2663]: I0113 20:43:51.300381 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w69cj\" (UniqueName: \"kubernetes.io/projected/75f9344f-cf21-44f9-b503-129b7802df4b-kube-api-access-w69cj\") pod \"tigera-operator-c7ccbd65-27m4p\" (UID: \"75f9344f-cf21-44f9-b503-129b7802df4b\") " pod="tigera-operator/tigera-operator-c7ccbd65-27m4p" Jan 13 20:43:51.351560 kubelet[2663]: E0113 20:43:51.351534 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:43:51.352031 containerd[1485]: time="2025-01-13T20:43:51.351970222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6bdlt,Uid:2c273e18-2554-480e-a89f-1b9a058167b0,Namespace:kube-system,Attempt:0,}" Jan 13 20:43:51.374459 containerd[1485]: time="2025-01-13T20:43:51.374367232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:43:51.374459 containerd[1485]: time="2025-01-13T20:43:51.374417788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:43:51.374459 containerd[1485]: time="2025-01-13T20:43:51.374430193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:51.374620 containerd[1485]: time="2025-01-13T20:43:51.374508974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:51.402177 systemd[1]: Started cri-containerd-a16f1f23aaf1dfdfacef4e80aef4733ece9b1bba3424078060889deeb6297698.scope - libcontainer container a16f1f23aaf1dfdfacef4e80aef4733ece9b1bba3424078060889deeb6297698. 
Jan 13 20:43:51.424178 containerd[1485]: time="2025-01-13T20:43:51.424139703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6bdlt,Uid:2c273e18-2554-480e-a89f-1b9a058167b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"a16f1f23aaf1dfdfacef4e80aef4733ece9b1bba3424078060889deeb6297698\""
Jan 13 20:43:51.424765 kubelet[2663]: E0113 20:43:51.424744 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:43:51.426626 containerd[1485]: time="2025-01-13T20:43:51.426592635Z" level=info msg="CreateContainer within sandbox \"a16f1f23aaf1dfdfacef4e80aef4733ece9b1bba3424078060889deeb6297698\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 20:43:51.442450 containerd[1485]: time="2025-01-13T20:43:51.442409350Z" level=info msg="CreateContainer within sandbox \"a16f1f23aaf1dfdfacef4e80aef4733ece9b1bba3424078060889deeb6297698\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"12925da5fe650a76da584eca5a15c28141a64162a084e49c87c13533701036ed\""
Jan 13 20:43:51.443122 containerd[1485]: time="2025-01-13T20:43:51.442923150Z" level=info msg="StartContainer for \"12925da5fe650a76da584eca5a15c28141a64162a084e49c87c13533701036ed\""
Jan 13 20:43:51.471148 systemd[1]: Started cri-containerd-12925da5fe650a76da584eca5a15c28141a64162a084e49c87c13533701036ed.scope - libcontainer container 12925da5fe650a76da584eca5a15c28141a64162a084e49c87c13533701036ed.
Jan 13 20:43:51.502297 containerd[1485]: time="2025-01-13T20:43:51.502226209Z" level=info msg="StartContainer for \"12925da5fe650a76da584eca5a15c28141a64162a084e49c87c13533701036ed\" returns successfully"
Jan 13 20:43:51.586170 containerd[1485]: time="2025-01-13T20:43:51.586052668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-27m4p,Uid:75f9344f-cf21-44f9-b503-129b7802df4b,Namespace:tigera-operator,Attempt:0,}"
Jan 13 20:43:51.610106 containerd[1485]: time="2025-01-13T20:43:51.609934952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:43:51.610106 containerd[1485]: time="2025-01-13T20:43:51.610028072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:43:51.610106 containerd[1485]: time="2025-01-13T20:43:51.610047329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:43:51.610318 containerd[1485]: time="2025-01-13T20:43:51.610134557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:43:51.631141 systemd[1]: Started cri-containerd-6c262791b70e8929ad91cf53b4cd3ebcc4e553b349132700f6eebae84505b2c1.scope - libcontainer container 6c262791b70e8929ad91cf53b4cd3ebcc4e553b349132700f6eebae84505b2c1.
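The kube-proxy entries above trace the CRI sequence kubelet drives through containerd: RunPodSandbox returns a sandbox id, CreateContainer is issued against that sandbox, and StartContainer launches the result. A minimal sketch of the same three calls against the CRI v1 API, assuming containerd's conventional socket path and an illustrative image reference, and eliding the mounts, security context, and log paths kubelet would normally populate:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial containerd's CRI endpoint (path is the conventional default).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Sandbox metadata mirrors the RunPodSandbox entry in the log.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-6bdlt",
                Uid:       "2c273e18-2554-480e-a89f-1b9a058167b0",
                Namespace: "kube-system",
                Attempt:   0,
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
                // Image tag is illustrative; the log does not record it.
                Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.29.0"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        if _, err := rt.StartContainer(ctx,
            &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            log.Fatal(err)
        }
        fmt.Println("started", ctr.ContainerId, "in sandbox", sb.PodSandboxId)
    }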
Jan 13 20:43:51.664332 containerd[1485]: time="2025-01-13T20:43:51.664287109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-27m4p,Uid:75f9344f-cf21-44f9-b503-129b7802df4b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6c262791b70e8929ad91cf53b4cd3ebcc4e553b349132700f6eebae84505b2c1\""
Jan 13 20:43:51.666314 containerd[1485]: time="2025-01-13T20:43:51.665802292Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 13 20:43:52.252217 kubelet[2663]: E0113 20:43:52.252182 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:43:52.259845 kubelet[2663]: I0113 20:43:52.259800 2663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6bdlt" podStartSLOduration=1.259758453 podStartE2EDuration="1.259758453s" podCreationTimestamp="2025-01-13 20:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:43:52.25915485 +0000 UTC m=+17.144350989" watchObservedRunningTime="2025-01-13 20:43:52.259758453 +0000 UTC m=+17.144954582"
Jan 13 20:43:53.455290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount811486882.mount: Deactivated successfully.
Jan 13 20:43:53.834378 containerd[1485]: time="2025-01-13T20:43:53.834256258Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:43:53.835132 containerd[1485]: time="2025-01-13T20:43:53.835066148Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764313"
Jan 13 20:43:53.836210 containerd[1485]: time="2025-01-13T20:43:53.836178428Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:43:53.838850 containerd[1485]: time="2025-01-13T20:43:53.838805744Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:43:53.839593 containerd[1485]: time="2025-01-13T20:43:53.839562941Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.173735301s"
Jan 13 20:43:53.839634 containerd[1485]: time="2025-01-13T20:43:53.839594723Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 13 20:43:53.841218 containerd[1485]: time="2025-01-13T20:43:53.841183570Z" level=info msg="CreateContainer within sandbox \"6c262791b70e8929ad91cf53b4cd3ebcc4e553b349132700f6eebae84505b2c1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 13 20:43:53.854188 containerd[1485]: time="2025-01-13T20:43:53.854151155Z" level=info msg="CreateContainer within sandbox \"6c262791b70e8929ad91cf53b4cd3ebcc4e553b349132700f6eebae84505b2c1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c82a5c605e7dd5356fb19040cacefcc3e345521f5c605fab0915252005d7cde6\""
Jan 13 20:43:53.854545 containerd[1485]: time="2025-01-13T20:43:53.854523331Z" level=info msg="StartContainer for \"c82a5c605e7dd5356fb19040cacefcc3e345521f5c605fab0915252005d7cde6\""
Jan 13 20:43:53.884172 systemd[1]: Started cri-containerd-c82a5c605e7dd5356fb19040cacefcc3e345521f5c605fab0915252005d7cde6.scope - libcontainer container c82a5c605e7dd5356fb19040cacefcc3e345521f5c605fab0915252005d7cde6.
Jan 13 20:43:53.943576 containerd[1485]: time="2025-01-13T20:43:53.943516543Z" level=info msg="StartContainer for \"c82a5c605e7dd5356fb19040cacefcc3e345521f5c605fab0915252005d7cde6\" returns successfully"
Jan 13 20:43:54.284332 kubelet[2663]: I0113 20:43:54.284278 2663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-27m4p" podStartSLOduration=1.109663771 podStartE2EDuration="3.284217138s" podCreationTimestamp="2025-01-13 20:43:51 +0000 UTC" firstStartedPulling="2025-01-13 20:43:51.665353586 +0000 UTC m=+16.550549705" lastFinishedPulling="2025-01-13 20:43:53.839906943 +0000 UTC m=+18.725103072" observedRunningTime="2025-01-13 20:43:54.283940567 +0000 UTC m=+19.169136696" watchObservedRunningTime="2025-01-13 20:43:54.284217138 +0000 UTC m=+19.169413277"
Jan 13 20:43:56.706677 kubelet[2663]: I0113 20:43:56.706634 2663 topology_manager.go:215] "Topology Admit Handler" podUID="0f2f6227-062f-4dc9-96f7-8d30a2b30b56" podNamespace="calico-system" podName="calico-typha-6665bbd8d-dmvpc"
Jan 13 20:43:56.718703 systemd[1]: Created slice kubepods-besteffort-pod0f2f6227_062f_4dc9_96f7_8d30a2b30b56.slice - libcontainer container kubepods-besteffort-pod0f2f6227_062f_4dc9_96f7_8d30a2b30b56.slice.
Jan 13 20:43:56.740813 kubelet[2663]: I0113 20:43:56.740569 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0f2f6227-062f-4dc9-96f7-8d30a2b30b56-typha-certs\") pod \"calico-typha-6665bbd8d-dmvpc\" (UID: \"0f2f6227-062f-4dc9-96f7-8d30a2b30b56\") " pod="calico-system/calico-typha-6665bbd8d-dmvpc"
Jan 13 20:43:56.740813 kubelet[2663]: I0113 20:43:56.740643 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66bd2\" (UniqueName: \"kubernetes.io/projected/0f2f6227-062f-4dc9-96f7-8d30a2b30b56-kube-api-access-66bd2\") pod \"calico-typha-6665bbd8d-dmvpc\" (UID: \"0f2f6227-062f-4dc9-96f7-8d30a2b30b56\") " pod="calico-system/calico-typha-6665bbd8d-dmvpc"
Jan 13 20:43:56.741071 kubelet[2663]: I0113 20:43:56.741051 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f2f6227-062f-4dc9-96f7-8d30a2b30b56-tigera-ca-bundle\") pod \"calico-typha-6665bbd8d-dmvpc\" (UID: \"0f2f6227-062f-4dc9-96f7-8d30a2b30b56\") " pod="calico-system/calico-typha-6665bbd8d-dmvpc"
Jan 13 20:43:56.766303 kubelet[2663]: I0113 20:43:56.766255 2663 topology_manager.go:215] "Topology Admit Handler" podUID="a6a9ff88-aebb-4f7b-b225-a408ba571402" podNamespace="calico-system" podName="calico-node-cqqtj"
Jan 13 20:43:56.776576 systemd[1]: Created slice kubepods-besteffort-poda6a9ff88_aebb_4f7b_b225_a408ba571402.slice - libcontainer container kubepods-besteffort-poda6a9ff88_aebb_4f7b_b225_a408ba571402.slice.
Jan 13 20:43:56.842052 kubelet[2663]: I0113 20:43:56.842000 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a6a9ff88-aebb-4f7b-b225-a408ba571402-node-certs\") pod \"calico-node-cqqtj\" (UID: \"a6a9ff88-aebb-4f7b-b225-a408ba571402\") " pod="calico-system/calico-node-cqqtj"
Jan 13 20:43:56.842052 kubelet[2663]: I0113 20:43:56.842053 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a6a9ff88-aebb-4f7b-b225-a408ba571402-flexvol-driver-host\") pod \"calico-node-cqqtj\" (UID: \"a6a9ff88-aebb-4f7b-b225-a408ba571402\") " pod="calico-system/calico-node-cqqtj"
Jan 13 20:43:56.842221 kubelet[2663]: I0113 20:43:56.842138 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6a9ff88-aebb-4f7b-b225-a408ba571402-lib-modules\") pod \"calico-node-cqqtj\" (UID: \"a6a9ff88-aebb-4f7b-b225-a408ba571402\") " pod="calico-system/calico-node-cqqtj"
Jan 13 20:43:56.842221 kubelet[2663]: I0113 20:43:56.842183 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a6a9ff88-aebb-4f7b-b225-a408ba571402-var-run-calico\") pod \"calico-node-cqqtj\" (UID: \"a6a9ff88-aebb-4f7b-b225-a408ba571402\") " pod="calico-system/calico-node-cqqtj"
Jan 13 20:43:56.842221 kubelet[2663]: I0113 20:43:56.842208 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtdzm\" (UniqueName: \"kubernetes.io/projected/a6a9ff88-aebb-4f7b-b225-a408ba571402-kube-api-access-gtdzm\") pod \"calico-node-cqqtj\" (UID: \"a6a9ff88-aebb-4f7b-b225-a408ba571402\") " pod="calico-system/calico-node-cqqtj"
Jan 13 20:43:56.842291 kubelet[2663]: I0113 20:43:56.842249 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6a9ff88-aebb-4f7b-b225-a408ba571402-xtables-lock\") pod \"calico-node-cqqtj\" (UID: \"a6a9ff88-aebb-4f7b-b225-a408ba571402\") " pod="calico-system/calico-node-cqqtj"
Jan 13 20:43:56.842326 kubelet[2663]: I0113 20:43:56.842288 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a6a9ff88-aebb-4f7b-b225-a408ba571402-policysync\") pod \"calico-node-cqqtj\" (UID: \"a6a9ff88-aebb-4f7b-b225-a408ba571402\") " pod="calico-system/calico-node-cqqtj"
Jan 13 20:43:56.842366 kubelet[2663]: I0113 20:43:56.842335 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a6a9ff88-aebb-4f7b-b225-a408ba571402-var-lib-calico\") pod \"calico-node-cqqtj\" (UID: \"a6a9ff88-aebb-4f7b-b225-a408ba571402\") " pod="calico-system/calico-node-cqqtj"
Jan 13 20:43:56.842366 kubelet[2663]: I0113 20:43:56.842361 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a6a9ff88-aebb-4f7b-b225-a408ba571402-cni-bin-dir\") pod \"calico-node-cqqtj\" (UID: \"a6a9ff88-aebb-4f7b-b225-a408ba571402\") " pod="calico-system/calico-node-cqqtj"
Jan 13 20:43:56.842417 kubelet[2663]: I0113 20:43:56.842396 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6a9ff88-aebb-4f7b-b225-a408ba571402-tigera-ca-bundle\") pod \"calico-node-cqqtj\" (UID: \"a6a9ff88-aebb-4f7b-b225-a408ba571402\") " pod="calico-system/calico-node-cqqtj"
Jan 13 20:43:56.842443 kubelet[2663]: I0113 20:43:56.842419 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a6a9ff88-aebb-4f7b-b225-a408ba571402-cni-log-dir\") pod \"calico-node-cqqtj\" (UID: \"a6a9ff88-aebb-4f7b-b225-a408ba571402\") " pod="calico-system/calico-node-cqqtj"
Jan 13 20:43:56.842472 kubelet[2663]: I0113 20:43:56.842456 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a6a9ff88-aebb-4f7b-b225-a408ba571402-cni-net-dir\") pod \"calico-node-cqqtj\" (UID: \"a6a9ff88-aebb-4f7b-b225-a408ba571402\") " pod="calico-system/calico-node-cqqtj"
Jan 13 20:43:56.900925 kubelet[2663]: I0113 20:43:56.900617 2663 topology_manager.go:215] "Topology Admit Handler" podUID="0740e80e-5301-4176-ac9e-7bf36ee863df" podNamespace="calico-system" podName="csi-node-driver-74fml"
Jan 13 20:43:56.901224 kubelet[2663]: E0113 20:43:56.901192 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-74fml" podUID="0740e80e-5301-4176-ac9e-7bf36ee863df"
Jan 13 20:43:56.943547 kubelet[2663]: I0113 20:43:56.943410 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0740e80e-5301-4176-ac9e-7bf36ee863df-kubelet-dir\") pod \"csi-node-driver-74fml\" (UID: \"0740e80e-5301-4176-ac9e-7bf36ee863df\") " pod="calico-system/csi-node-driver-74fml"
Jan 13 20:43:56.944395 kubelet[2663]: I0113 20:43:56.943787 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0740e80e-5301-4176-ac9e-7bf36ee863df-socket-dir\") pod \"csi-node-driver-74fml\" (UID: \"0740e80e-5301-4176-ac9e-7bf36ee863df\") " pod="calico-system/csi-node-driver-74fml"
Jan 13 20:43:56.944395 kubelet[2663]: I0113 20:43:56.943895 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kpgf\" (UniqueName: \"kubernetes.io/projected/0740e80e-5301-4176-ac9e-7bf36ee863df-kube-api-access-2kpgf\") pod \"csi-node-driver-74fml\" (UID: \"0740e80e-5301-4176-ac9e-7bf36ee863df\") " pod="calico-system/csi-node-driver-74fml"
Jan 13 20:43:56.944395 kubelet[2663]: I0113 20:43:56.943935 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0740e80e-5301-4176-ac9e-7bf36ee863df-registration-dir\") pod \"csi-node-driver-74fml\" (UID: \"0740e80e-5301-4176-ac9e-7bf36ee863df\") " pod="calico-system/csi-node-driver-74fml"
Jan 13 20:43:56.944395 kubelet[2663]: I0113 20:43:56.944032 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0740e80e-5301-4176-ac9e-7bf36ee863df-varrun\") pod \"csi-node-driver-74fml\" (UID: \"0740e80e-5301-4176-ac9e-7bf36ee863df\") " pod="calico-system/csi-node-driver-74fml"
Jan 13 20:43:56.945141 kubelet[2663]: E0113 20:43:56.945128 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:43:56.945297 kubelet[2663]: W0113 20:43:56.945216 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:43:56.945297 kubelet[2663]: E0113 20:43:56.945241 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the FlexVolume init failure triplet above repeats 23 more times between 20:43:56.945 and 20:43:56.960]
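The flood of driver-call errors comes from kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ for a FlexVolume driver: the expected uds binary is not installed yet, the init call therefore produces no output, and unmarshalling the empty string as JSON fails. The binary is installed later by calico-node's flexvol driver (the pod2daemon-flexvol image pulled further down in this log), at which point the probe succeeds. The FlexVolume contract only requires the driver to answer init with a JSON status envelope on stdout; a minimal sketch of a conforming driver, illustrative only and not a reconstruction of the real uds driver:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON envelope FlexVolume drivers must print.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // Printing nothing here is exactly what produced the
            // "unexpected end of JSON input" errors in the log.
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
        default:
            out, _ := json.Marshal(driverStatus{Status: "Not supported"})
            fmt.Println(string(out))
        }
    }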
Jan 13 20:43:57.022777 kubelet[2663]: E0113 20:43:57.022734 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:43:57.023241 containerd[1485]: time="2025-01-13T20:43:57.023197177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6665bbd8d-dmvpc,Uid:0f2f6227-062f-4dc9-96f7-8d30a2b30b56,Namespace:calico-system,Attempt:0,}"
Jan 13 20:43:57.045361 kubelet[2663]: E0113 20:43:57.045333 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:43:57.045361 kubelet[2663]: W0113 20:43:57.045351 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:43:57.045503 kubelet[2663]: E0113 20:43:57.045372 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same triplet repeats 25 more times between 20:43:57.045 and 20:43:57.055, interleaved with the containerd activity below]
Jan 13 20:43:57.050515 containerd[1485]: time="2025-01-13T20:43:57.050272889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:43:57.050515 containerd[1485]: time="2025-01-13T20:43:57.050330019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:43:57.050515 containerd[1485]: time="2025-01-13T20:43:57.050340990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:43:57.050515 containerd[1485]: time="2025-01-13T20:43:57.050409501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:43:57.069165 systemd[1]: Started cri-containerd-1119586019345ed3ef7935e5a2b19e022e08d21914853e6883bc9bcdcf1fb8e0.scope - libcontainer container 1119586019345ed3ef7935e5a2b19e022e08d21914853e6883bc9bcdcf1fb8e0.
Jan 13 20:43:57.080031 kubelet[2663]: E0113 20:43:57.079904 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:43:57.080771 containerd[1485]: time="2025-01-13T20:43:57.080280821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cqqtj,Uid:a6a9ff88-aebb-4f7b-b225-a408ba571402,Namespace:calico-system,Attempt:0,}"
Jan 13 20:43:57.109303 containerd[1485]: time="2025-01-13T20:43:57.109250122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6665bbd8d-dmvpc,Uid:0f2f6227-062f-4dc9-96f7-8d30a2b30b56,Namespace:calico-system,Attempt:0,} returns sandbox id \"1119586019345ed3ef7935e5a2b19e022e08d21914853e6883bc9bcdcf1fb8e0\""
Jan 13 20:43:57.110188 kubelet[2663]: E0113 20:43:57.110158 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:43:57.110733 containerd[1485]: time="2025-01-13T20:43:57.110594519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:43:57.110733 containerd[1485]: time="2025-01-13T20:43:57.110676657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:43:57.110733 containerd[1485]: time="2025-01-13T20:43:57.110689270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:43:57.110943 containerd[1485]: time="2025-01-13T20:43:57.110834549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:43:57.111608 containerd[1485]: time="2025-01-13T20:43:57.111568466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 13 20:43:57.132181 systemd[1]: Started cri-containerd-5abcf3085cce234854dffafa3eb61a58dea765fdc019ee350be419a1793e8261.scope - libcontainer container 5abcf3085cce234854dffafa3eb61a58dea765fdc019ee350be419a1793e8261.
Jan 13 20:43:57.153542 containerd[1485]: time="2025-01-13T20:43:57.153498296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cqqtj,Uid:a6a9ff88-aebb-4f7b-b225-a408ba571402,Namespace:calico-system,Attempt:0,} returns sandbox id \"5abcf3085cce234854dffafa3eb61a58dea765fdc019ee350be419a1793e8261\""
Jan 13 20:43:57.154134 kubelet[2663]: E0113 20:43:57.154101 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:43:58.214975 kubelet[2663]: E0113 20:43:58.214923 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-74fml" podUID="0740e80e-5301-4176-ac9e-7bf36ee863df"
Jan 13 20:43:58.545294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1926647970.mount: Deactivated successfully.
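csi-node-driver-74fml keeps failing to sync because it needs the pod network, and the runtime reports NetworkReady=false until a CNI configuration exists; calico-node, whose sandbox was just created above, is what will eventually write that configuration. A quick readiness check over the conventional conf directory (/etc/cni/net.d is containerd's default, assumed here) looks like:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // containerd's default CNI configuration directory (assumed path);
        // only .conflist files are matched here for brevity, though plain
        // .conf files also count.
        confs, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
        if len(confs) == 0 {
            fmt.Println("no CNI config yet: runtime will report NetworkReady=false")
            os.Exit(1)
        }
        for _, c := range confs {
            fmt.Println("CNI config present:", c)
        }
    }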
Jan 13 20:43:59.178186 containerd[1485]: time="2025-01-13T20:43:59.178135319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:43:59.179124 containerd[1485]: time="2025-01-13T20:43:59.179074115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 13 20:43:59.180409 containerd[1485]: time="2025-01-13T20:43:59.180365095Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:43:59.182792 containerd[1485]: time="2025-01-13T20:43:59.182754997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:43:59.183379 containerd[1485]: time="2025-01-13T20:43:59.183339576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.071670507s"
Jan 13 20:43:59.183379 containerd[1485]: time="2025-01-13T20:43:59.183375485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 13 20:43:59.184137 containerd[1485]: time="2025-01-13T20:43:59.184002134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 20:43:59.190700 containerd[1485]: time="2025-01-13T20:43:59.190547457Z" level=info msg="CreateContainer within sandbox \"1119586019345ed3ef7935e5a2b19e022e08d21914853e6883bc9bcdcf1fb8e0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 13 20:43:59.205923 containerd[1485]: time="2025-01-13T20:43:59.205881316Z" level=info msg="CreateContainer within sandbox \"1119586019345ed3ef7935e5a2b19e022e08d21914853e6883bc9bcdcf1fb8e0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8bae944b8c12765d3e7481566031036def8e17bfda422bec295df5eab10cec00\""
Jan 13 20:43:59.206405 containerd[1485]: time="2025-01-13T20:43:59.206382394Z" level=info msg="StartContainer for \"8bae944b8c12765d3e7481566031036def8e17bfda422bec295df5eab10cec00\""
Jan 13 20:43:59.235164 systemd[1]: Started cri-containerd-8bae944b8c12765d3e7481566031036def8e17bfda422bec295df5eab10cec00.scope - libcontainer container 8bae944b8c12765d3e7481566031036def8e17bfda422bec295df5eab10cec00.
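containerd reports each pull's wall-clock cost inline ("... in 2.071670507s" for the typha image above). The suffix is a Go time.Duration string, so it round-trips through time.ParseDuration; a small sketch extracting it from a logged message, with an illustrative regular expression:

    package main

    import (
        "fmt"
        "regexp"
        "time"
    )

    func main() {
        msg := `Pulled image "ghcr.io/flatcar/calico/typha:v3.29.1" ... in 2.071670507s`
        // Grab the trailing duration token; the pattern is a rough illustration.
        m := regexp.MustCompile(`in ([0-9.]+[a-zµ]+)$`).FindStringSubmatch(msg)
        if m == nil {
            panic("no duration found")
        }
        d, err := time.ParseDuration(m[1])
        if err != nil {
            panic(err)
        }
        fmt.Printf("pull took %v (%.2f s)\n", d, d.Seconds())
    }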
Jan 13 20:43:59.286975 containerd[1485]: time="2025-01-13T20:43:59.286935778Z" level=info msg="StartContainer for \"8bae944b8c12765d3e7481566031036def8e17bfda422bec295df5eab10cec00\" returns successfully" Jan 13 20:44:00.214880 kubelet[2663]: E0113 20:44:00.214841 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-74fml" podUID="0740e80e-5301-4176-ac9e-7bf36ee863df" Jan 13 20:44:00.268233 kubelet[2663]: E0113 20:44:00.268171 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:00.356995 kubelet[2663]: E0113 20:44:00.356965 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.356995 kubelet[2663]: W0113 20:44:00.356986 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.357160 kubelet[2663]: E0113 20:44:00.357007 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.357226 kubelet[2663]: E0113 20:44:00.357207 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.357226 kubelet[2663]: W0113 20:44:00.357218 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.357290 kubelet[2663]: E0113 20:44:00.357233 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.357445 kubelet[2663]: E0113 20:44:00.357432 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.357445 kubelet[2663]: W0113 20:44:00.357440 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.357595 kubelet[2663]: E0113 20:44:00.357450 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.357644 kubelet[2663]: E0113 20:44:00.357621 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.357644 kubelet[2663]: W0113 20:44:00.357631 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.357644 kubelet[2663]: E0113 20:44:00.357641 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:44:00.357828 kubelet[2663]: E0113 20:44:00.357802 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.357828 kubelet[2663]: W0113 20:44:00.357810 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.357828 kubelet[2663]: E0113 20:44:00.357819 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.358001 kubelet[2663]: E0113 20:44:00.357982 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.358001 kubelet[2663]: W0113 20:44:00.357990 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.358001 kubelet[2663]: E0113 20:44:00.358001 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.358251 kubelet[2663]: E0113 20:44:00.358231 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.358251 kubelet[2663]: W0113 20:44:00.358242 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.358251 kubelet[2663]: E0113 20:44:00.358254 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.358439 kubelet[2663]: E0113 20:44:00.358422 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.358439 kubelet[2663]: W0113 20:44:00.358431 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.358439 kubelet[2663]: E0113 20:44:00.358441 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.358616 kubelet[2663]: E0113 20:44:00.358601 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.358616 kubelet[2663]: W0113 20:44:00.358608 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.358616 kubelet[2663]: E0113 20:44:00.358617 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:44:00.358791 kubelet[2663]: E0113 20:44:00.358774 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.358791 kubelet[2663]: W0113 20:44:00.358783 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.358791 kubelet[2663]: E0113 20:44:00.358793 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.358966 kubelet[2663]: E0113 20:44:00.358949 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.358966 kubelet[2663]: W0113 20:44:00.358958 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.358966 kubelet[2663]: E0113 20:44:00.358967 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.359156 kubelet[2663]: E0113 20:44:00.359136 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.359156 kubelet[2663]: W0113 20:44:00.359147 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.359156 kubelet[2663]: E0113 20:44:00.359155 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.359397 kubelet[2663]: E0113 20:44:00.359373 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.359397 kubelet[2663]: W0113 20:44:00.359387 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.359474 kubelet[2663]: E0113 20:44:00.359416 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.359609 kubelet[2663]: E0113 20:44:00.359584 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.359609 kubelet[2663]: W0113 20:44:00.359595 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.359609 kubelet[2663]: E0113 20:44:00.359604 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:44:00.359784 kubelet[2663]: E0113 20:44:00.359765 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.359784 kubelet[2663]: W0113 20:44:00.359774 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.359784 kubelet[2663]: E0113 20:44:00.359783 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.373072 kubelet[2663]: E0113 20:44:00.373047 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.373072 kubelet[2663]: W0113 20:44:00.373060 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.373072 kubelet[2663]: E0113 20:44:00.373072 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.373294 kubelet[2663]: E0113 20:44:00.373278 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.373294 kubelet[2663]: W0113 20:44:00.373288 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.373376 kubelet[2663]: E0113 20:44:00.373304 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.373551 kubelet[2663]: E0113 20:44:00.373534 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.373551 kubelet[2663]: W0113 20:44:00.373544 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.373602 kubelet[2663]: E0113 20:44:00.373557 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.373749 kubelet[2663]: E0113 20:44:00.373731 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.373749 kubelet[2663]: W0113 20:44:00.373740 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.373810 kubelet[2663]: E0113 20:44:00.373754 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:44:00.373936 kubelet[2663]: E0113 20:44:00.373920 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.373936 kubelet[2663]: W0113 20:44:00.373930 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.373981 kubelet[2663]: E0113 20:44:00.373941 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.374115 kubelet[2663]: E0113 20:44:00.374101 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.374115 kubelet[2663]: W0113 20:44:00.374111 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.374174 kubelet[2663]: E0113 20:44:00.374124 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.374307 kubelet[2663]: E0113 20:44:00.374292 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.374307 kubelet[2663]: W0113 20:44:00.374302 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.374360 kubelet[2663]: E0113 20:44:00.374315 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.374557 kubelet[2663]: E0113 20:44:00.374538 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.374557 kubelet[2663]: W0113 20:44:00.374551 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.374620 kubelet[2663]: E0113 20:44:00.374567 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.374733 kubelet[2663]: E0113 20:44:00.374715 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.374733 kubelet[2663]: W0113 20:44:00.374725 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.374787 kubelet[2663]: E0113 20:44:00.374738 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:44:00.374920 kubelet[2663]: E0113 20:44:00.374903 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.374920 kubelet[2663]: W0113 20:44:00.374914 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.374971 kubelet[2663]: E0113 20:44:00.374927 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.375110 kubelet[2663]: E0113 20:44:00.375096 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.375110 kubelet[2663]: W0113 20:44:00.375106 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.375165 kubelet[2663]: E0113 20:44:00.375129 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.375288 kubelet[2663]: E0113 20:44:00.375274 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.375288 kubelet[2663]: W0113 20:44:00.375284 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.375339 kubelet[2663]: E0113 20:44:00.375296 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.375475 kubelet[2663]: E0113 20:44:00.375461 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.375475 kubelet[2663]: W0113 20:44:00.375470 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.375538 kubelet[2663]: E0113 20:44:00.375483 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.375663 kubelet[2663]: E0113 20:44:00.375646 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.375663 kubelet[2663]: W0113 20:44:00.375656 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.375720 kubelet[2663]: E0113 20:44:00.375669 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:44:00.375845 kubelet[2663]: E0113 20:44:00.375828 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.375845 kubelet[2663]: W0113 20:44:00.375838 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.375899 kubelet[2663]: E0113 20:44:00.375850 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.376060 kubelet[2663]: E0113 20:44:00.376044 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.376060 kubelet[2663]: W0113 20:44:00.376057 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.376113 kubelet[2663]: E0113 20:44:00.376073 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.376289 kubelet[2663]: E0113 20:44:00.376274 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.376289 kubelet[2663]: W0113 20:44:00.376284 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.376345 kubelet[2663]: E0113 20:44:00.376297 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:44:00.376472 kubelet[2663]: E0113 20:44:00.376457 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:44:00.376472 kubelet[2663]: W0113 20:44:00.376468 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:44:00.376523 kubelet[2663]: E0113 20:44:00.376478 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:44:00.807597 containerd[1485]: time="2025-01-13T20:44:00.807540146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:00.808357 containerd[1485]: time="2025-01-13T20:44:00.808305860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 13 20:44:00.809514 containerd[1485]: time="2025-01-13T20:44:00.809480326Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:00.811421 containerd[1485]: time="2025-01-13T20:44:00.811381401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:00.811974 containerd[1485]: time="2025-01-13T20:44:00.811938916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.627889702s" Jan 13 20:44:00.811974 containerd[1485]: time="2025-01-13T20:44:00.811963573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 20:44:00.813195 containerd[1485]: time="2025-01-13T20:44:00.813157045Z" level=info msg="CreateContainer within sandbox \"5abcf3085cce234854dffafa3eb61a58dea765fdc019ee350be419a1793e8261\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:44:00.828039 containerd[1485]: time="2025-01-13T20:44:00.827992658Z" level=info msg="CreateContainer within sandbox \"5abcf3085cce234854dffafa3eb61a58dea765fdc019ee350be419a1793e8261\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"10ea653d6515a49f5d9bab52ba6f23dcafba6375e73e42a3ca80bb98004ec09c\"" Jan 13 20:44:00.828414 containerd[1485]: time="2025-01-13T20:44:00.828387873Z" level=info msg="StartContainer for \"10ea653d6515a49f5d9bab52ba6f23dcafba6375e73e42a3ca80bb98004ec09c\"" Jan 13 20:44:00.862144 systemd[1]: Started cri-containerd-10ea653d6515a49f5d9bab52ba6f23dcafba6375e73e42a3ca80bb98004ec09c.scope - libcontainer container 10ea653d6515a49f5d9bab52ba6f23dcafba6375e73e42a3ca80bb98004ec09c. Jan 13 20:44:00.904047 systemd[1]: cri-containerd-10ea653d6515a49f5d9bab52ba6f23dcafba6375e73e42a3ca80bb98004ec09c.scope: Deactivated successfully. Jan 13 20:44:00.944285 containerd[1485]: time="2025-01-13T20:44:00.944222341Z" level=info msg="StartContainer for \"10ea653d6515a49f5d9bab52ba6f23dcafba6375e73e42a3ca80bb98004ec09c\" returns successfully" Jan 13 20:44:01.189113 systemd[1]: run-containerd-runc-k8s.io-10ea653d6515a49f5d9bab52ba6f23dcafba6375e73e42a3ca80bb98004ec09c-runc.WNjzl2.mount: Deactivated successfully. Jan 13 20:44:01.189238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10ea653d6515a49f5d9bab52ba6f23dcafba6375e73e42a3ca80bb98004ec09c-rootfs.mount: Deactivated successfully. 
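The burst of driver-call.go errors above and the flexvol-driver container started here are two sides of the same handshake: kubelet execs each FlexVolume driver binary (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds) with the argument init and expects a JSON status object on stdout, so an empty stdout is precisely what produces "unexpected end of JSON input". Until pod2daemon-flexvol installs that binary, every probe fails. A sketch of the minimal driver side of the contract; the status fields follow the documented FlexVolume protocol, everything else is illustrative.

package main

import (
	"encoding/json"
	"os"
)

// driverStatus is the JSON object the FlexVolume protocol expects a driver
// to print in response to "init".
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Until the flexvol-driver container has installed a real binary,
		// nothing answers this call and kubelet logs the probe failure above.
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
	}
}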
Jan 13 20:44:01.270367 kubelet[2663]: I0113 20:44:01.270340 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:44:01.270858 kubelet[2663]: E0113 20:44:01.270653 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:01.271095 kubelet[2663]: E0113 20:44:01.271081 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:01.316583 kubelet[2663]: I0113 20:44:01.316210 2663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6665bbd8d-dmvpc" podStartSLOduration=3.243677888 podStartE2EDuration="5.316172403s" podCreationTimestamp="2025-01-13 20:43:56 +0000 UTC" firstStartedPulling="2025-01-13 20:43:57.111175633 +0000 UTC m=+21.996371762" lastFinishedPulling="2025-01-13 20:43:59.183670148 +0000 UTC m=+24.068866277" observedRunningTime="2025-01-13 20:44:00.301002029 +0000 UTC m=+25.186198158" watchObservedRunningTime="2025-01-13 20:44:01.316172403 +0000 UTC m=+26.201368532" Jan 13 20:44:01.322448 containerd[1485]: time="2025-01-13T20:44:01.322390072Z" level=info msg="shim disconnected" id=10ea653d6515a49f5d9bab52ba6f23dcafba6375e73e42a3ca80bb98004ec09c namespace=k8s.io Jan 13 20:44:01.322448 containerd[1485]: time="2025-01-13T20:44:01.322445759Z" level=warning msg="cleaning up after shim disconnected" id=10ea653d6515a49f5d9bab52ba6f23dcafba6375e73e42a3ca80bb98004ec09c namespace=k8s.io Jan 13 20:44:01.322448 containerd[1485]: time="2025-01-13T20:44:01.322454416Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:44:02.214956 kubelet[2663]: E0113 20:44:02.214899 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-74fml" podUID="0740e80e-5301-4176-ac9e-7bf36ee863df" Jan 13 20:44:02.273683 kubelet[2663]: E0113 20:44:02.273652 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:02.274561 containerd[1485]: time="2025-01-13T20:44:02.274512217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:44:03.532330 kubelet[2663]: I0113 20:44:03.532287 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:44:03.533367 kubelet[2663]: E0113 20:44:03.532974 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:04.214709 kubelet[2663]: E0113 20:44:04.214670 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-74fml" podUID="0740e80e-5301-4176-ac9e-7bf36ee863df" Jan 13 20:44:04.276801 kubelet[2663]: E0113 20:44:04.276765 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 
20:44:04.875042 systemd[1]: Started sshd@7-10.0.0.148:22-10.0.0.1:37160.service - OpenSSH per-connection server daemon (10.0.0.1:37160). Jan 13 20:44:04.959860 sshd[3363]: Accepted publickey for core from 10.0.0.1 port 37160 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:04.968166 sshd-session[3363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:04.984748 systemd-logind[1473]: New session 8 of user core. Jan 13 20:44:04.999176 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:44:05.340242 sshd[3365]: Connection closed by 10.0.0.1 port 37160 Jan 13 20:44:05.340250 sshd-session[3363]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:05.346398 systemd-logind[1473]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:44:05.347333 systemd[1]: sshd@7-10.0.0.148:22-10.0.0.1:37160.service: Deactivated successfully. Jan 13 20:44:05.350088 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:44:05.352486 systemd-logind[1473]: Removed session 8. Jan 13 20:44:06.214222 kubelet[2663]: E0113 20:44:06.214175 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-74fml" podUID="0740e80e-5301-4176-ac9e-7bf36ee863df" Jan 13 20:44:06.618863 containerd[1485]: time="2025-01-13T20:44:06.618826439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:06.619721 containerd[1485]: time="2025-01-13T20:44:06.619652201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 20:44:06.620952 containerd[1485]: time="2025-01-13T20:44:06.620927480Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:06.623305 containerd[1485]: time="2025-01-13T20:44:06.623244775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:06.623856 containerd[1485]: time="2025-01-13T20:44:06.623823467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.349270662s" Jan 13 20:44:06.623856 containerd[1485]: time="2025-01-13T20:44:06.623848004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 20:44:06.627739 containerd[1485]: time="2025-01-13T20:44:06.627699820Z" level=info msg="CreateContainer within sandbox \"5abcf3085cce234854dffafa3eb61a58dea765fdc019ee350be419a1793e8261\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:44:06.642743 containerd[1485]: time="2025-01-13T20:44:06.642687307Z" level=info msg="CreateContainer within sandbox \"5abcf3085cce234854dffafa3eb61a58dea765fdc019ee350be419a1793e8261\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cd6462759bef683397e4c7b33bfc8ce95f1811eeb8b77aebef5ce767918ef076\"" Jan 13 20:44:06.643306 containerd[1485]: time="2025-01-13T20:44:06.643216435Z" level=info msg="StartContainer for \"cd6462759bef683397e4c7b33bfc8ce95f1811eeb8b77aebef5ce767918ef076\"" Jan 13 20:44:06.667568 systemd[1]: run-containerd-runc-k8s.io-cd6462759bef683397e4c7b33bfc8ce95f1811eeb8b77aebef5ce767918ef076-runc.ZxELKD.mount: Deactivated successfully. Jan 13 20:44:06.675226 systemd[1]: Started cri-containerd-cd6462759bef683397e4c7b33bfc8ce95f1811eeb8b77aebef5ce767918ef076.scope - libcontainer container cd6462759bef683397e4c7b33bfc8ce95f1811eeb8b77aebef5ce767918ef076. Jan 13 20:44:06.781774 containerd[1485]: time="2025-01-13T20:44:06.781721042Z" level=info msg="StartContainer for \"cd6462759bef683397e4c7b33bfc8ce95f1811eeb8b77aebef5ce767918ef076\" returns successfully" Jan 13 20:44:07.605208 kubelet[2663]: E0113 20:44:07.605164 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:07.645466 containerd[1485]: time="2025-01-13T20:44:07.645362482Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:44:07.648389 systemd[1]: cri-containerd-cd6462759bef683397e4c7b33bfc8ce95f1811eeb8b77aebef5ce767918ef076.scope: Deactivated successfully. Jan 13 20:44:07.672277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd6462759bef683397e4c7b33bfc8ce95f1811eeb8b77aebef5ce767918ef076-rootfs.mount: Deactivated successfully. 
Jan 13 20:44:07.681345 containerd[1485]: time="2025-01-13T20:44:07.681286154Z" level=info msg="shim disconnected" id=cd6462759bef683397e4c7b33bfc8ce95f1811eeb8b77aebef5ce767918ef076 namespace=k8s.io Jan 13 20:44:07.681345 containerd[1485]: time="2025-01-13T20:44:07.681342422Z" level=warning msg="cleaning up after shim disconnected" id=cd6462759bef683397e4c7b33bfc8ce95f1811eeb8b77aebef5ce767918ef076 namespace=k8s.io Jan 13 20:44:07.681531 containerd[1485]: time="2025-01-13T20:44:07.681351479Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:44:07.697085 kubelet[2663]: I0113 20:44:07.696871 2663 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:44:07.718765 kubelet[2663]: I0113 20:44:07.718722 2663 topology_manager.go:215] "Topology Admit Handler" podUID="24356970-d4ba-4d53-b9ce-72c96ee695b4" podNamespace="kube-system" podName="coredns-76f75df574-z9f87" Jan 13 20:44:07.720232 kubelet[2663]: I0113 20:44:07.720088 2663 topology_manager.go:215] "Topology Admit Handler" podUID="acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83" podNamespace="kube-system" podName="coredns-76f75df574-qkvcn" Jan 13 20:44:07.724540 kubelet[2663]: I0113 20:44:07.724498 2663 topology_manager.go:215] "Topology Admit Handler" podUID="5379735e-d3c8-481c-ba97-384db8752ee4" podNamespace="calico-system" podName="calico-kube-controllers-69dc874945-vclf2" Jan 13 20:44:07.724749 kubelet[2663]: I0113 20:44:07.724692 2663 topology_manager.go:215] "Topology Admit Handler" podUID="0d99061e-8895-489c-be69-03c406284aa9" podNamespace="calico-apiserver" podName="calico-apiserver-c845b497c-psj2v" Jan 13 20:44:07.725102 kubelet[2663]: I0113 20:44:07.725066 2663 topology_manager.go:215] "Topology Admit Handler" podUID="10adf362-b95c-4368-9f5a-3041f3e43b8c" podNamespace="calico-apiserver" podName="calico-apiserver-c845b497c-48828" Jan 13 20:44:07.732767 systemd[1]: Created slice kubepods-burstable-pod24356970_d4ba_4d53_b9ce_72c96ee695b4.slice - libcontainer container kubepods-burstable-pod24356970_d4ba_4d53_b9ce_72c96ee695b4.slice. Jan 13 20:44:07.740214 systemd[1]: Created slice kubepods-burstable-podacbb54e8_9cf8_4618_abd6_3a9f5bc9cb83.slice - libcontainer container kubepods-burstable-podacbb54e8_9cf8_4618_abd6_3a9f5bc9cb83.slice. Jan 13 20:44:07.747354 systemd[1]: Created slice kubepods-besteffort-pod0d99061e_8895_489c_be69_03c406284aa9.slice - libcontainer container kubepods-besteffort-pod0d99061e_8895_489c_be69_03c406284aa9.slice. Jan 13 20:44:07.752788 systemd[1]: Created slice kubepods-besteffort-pod5379735e_d3c8_481c_ba97_384db8752ee4.slice - libcontainer container kubepods-besteffort-pod5379735e_d3c8_481c_ba97_384db8752ee4.slice. Jan 13 20:44:07.759144 systemd[1]: Created slice kubepods-besteffort-pod10adf362_b95c_4368_9f5a_3041f3e43b8c.slice - libcontainer container kubepods-besteffort-pod10adf362_b95c_4368_9f5a_3041f3e43b8c.slice. 
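The slice names in these Created slice entries follow a visible convention: kubepods, then the pod's QoS class, then "pod" plus the pod UID with dashes replaced by underscores (systemd reserves "-" as the hierarchy separator in unit names, so kubepods-burstable-...slice nests under kubepods.slice). A sketch of that naming rule, reproduced from the log rather than from kubelet's cgroup manager:

package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the systemd slice name from the QoS class and pod UID,
// escaping dashes the way the log lines above show.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// coredns-76f75df574-z9f87, a burstable pod:
	fmt.Println(podSliceName("burstable", "24356970-d4ba-4d53-b9ce-72c96ee695b4"))
	// calico-apiserver-c845b497c-psj2v, a best-effort pod:
	fmt.Println(podSliceName("besteffort", "0d99061e-8895-489c-be69-03c406284aa9"))
}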
Jan 13 20:44:07.836917 kubelet[2663]: I0113 20:44:07.836854 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtxtq\" (UniqueName: \"kubernetes.io/projected/10adf362-b95c-4368-9f5a-3041f3e43b8c-kube-api-access-mtxtq\") pod \"calico-apiserver-c845b497c-48828\" (UID: \"10adf362-b95c-4368-9f5a-3041f3e43b8c\") " pod="calico-apiserver/calico-apiserver-c845b497c-48828" Jan 13 20:44:07.836917 kubelet[2663]: I0113 20:44:07.836908 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d99061e-8895-489c-be69-03c406284aa9-calico-apiserver-certs\") pod \"calico-apiserver-c845b497c-psj2v\" (UID: \"0d99061e-8895-489c-be69-03c406284aa9\") " pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" Jan 13 20:44:07.836917 kubelet[2663]: I0113 20:44:07.836931 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njbm6\" (UniqueName: \"kubernetes.io/projected/0d99061e-8895-489c-be69-03c406284aa9-kube-api-access-njbm6\") pod \"calico-apiserver-c845b497c-psj2v\" (UID: \"0d99061e-8895-489c-be69-03c406284aa9\") " pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" Jan 13 20:44:07.837160 kubelet[2663]: I0113 20:44:07.837005 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6cbp\" (UniqueName: \"kubernetes.io/projected/5379735e-d3c8-481c-ba97-384db8752ee4-kube-api-access-w6cbp\") pod \"calico-kube-controllers-69dc874945-vclf2\" (UID: \"5379735e-d3c8-481c-ba97-384db8752ee4\") " pod="calico-system/calico-kube-controllers-69dc874945-vclf2" Jan 13 20:44:07.837160 kubelet[2663]: I0113 20:44:07.837124 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5h5b\" (UniqueName: \"kubernetes.io/projected/24356970-d4ba-4d53-b9ce-72c96ee695b4-kube-api-access-q5h5b\") pod \"coredns-76f75df574-z9f87\" (UID: \"24356970-d4ba-4d53-b9ce-72c96ee695b4\") " pod="kube-system/coredns-76f75df574-z9f87" Jan 13 20:44:07.837240 kubelet[2663]: I0113 20:44:07.837167 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5379735e-d3c8-481c-ba97-384db8752ee4-tigera-ca-bundle\") pod \"calico-kube-controllers-69dc874945-vclf2\" (UID: \"5379735e-d3c8-481c-ba97-384db8752ee4\") " pod="calico-system/calico-kube-controllers-69dc874945-vclf2" Jan 13 20:44:07.837240 kubelet[2663]: I0113 20:44:07.837192 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbl58\" (UniqueName: \"kubernetes.io/projected/acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83-kube-api-access-zbl58\") pod \"coredns-76f75df574-qkvcn\" (UID: \"acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83\") " pod="kube-system/coredns-76f75df574-qkvcn" Jan 13 20:44:07.837311 kubelet[2663]: I0113 20:44:07.837265 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24356970-d4ba-4d53-b9ce-72c96ee695b4-config-volume\") pod \"coredns-76f75df574-z9f87\" (UID: \"24356970-d4ba-4d53-b9ce-72c96ee695b4\") " pod="kube-system/coredns-76f75df574-z9f87" Jan 13 20:44:07.837467 kubelet[2663]: I0113 20:44:07.837355 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83-config-volume\") pod \"coredns-76f75df574-qkvcn\" (UID: \"acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83\") " pod="kube-system/coredns-76f75df574-qkvcn" Jan 13 20:44:07.837467 kubelet[2663]: I0113 20:44:07.837376 2663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/10adf362-b95c-4368-9f5a-3041f3e43b8c-calico-apiserver-certs\") pod \"calico-apiserver-c845b497c-48828\" (UID: \"10adf362-b95c-4368-9f5a-3041f3e43b8c\") " pod="calico-apiserver/calico-apiserver-c845b497c-48828" Jan 13 20:44:08.037305 kubelet[2663]: E0113 20:44:08.037266 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:08.038111 containerd[1485]: time="2025-01-13T20:44:08.038061336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9f87,Uid:24356970-d4ba-4d53-b9ce-72c96ee695b4,Namespace:kube-system,Attempt:0,}" Jan 13 20:44:08.043074 kubelet[2663]: E0113 20:44:08.043047 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:08.043979 containerd[1485]: time="2025-01-13T20:44:08.043944861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qkvcn,Uid:acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83,Namespace:kube-system,Attempt:0,}" Jan 13 20:44:08.051228 containerd[1485]: time="2025-01-13T20:44:08.051183146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-psj2v,Uid:0d99061e-8895-489c-be69-03c406284aa9,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:44:08.056932 containerd[1485]: time="2025-01-13T20:44:08.056896959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dc874945-vclf2,Uid:5379735e-d3c8-481c-ba97-384db8752ee4,Namespace:calico-system,Attempt:0,}" Jan 13 20:44:08.062692 containerd[1485]: time="2025-01-13T20:44:08.062642392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-48828,Uid:10adf362-b95c-4368-9f5a-3041f3e43b8c,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:44:08.129792 containerd[1485]: time="2025-01-13T20:44:08.129186601Z" level=error msg="Failed to destroy network for sandbox \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.129792 containerd[1485]: time="2025-01-13T20:44:08.129588176Z" level=error msg="encountered an error cleaning up failed sandbox \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.129792 containerd[1485]: time="2025-01-13T20:44:08.129642599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9f87,Uid:24356970-d4ba-4d53-b9ce-72c96ee695b4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.129977 kubelet[2663]: E0113 20:44:08.129942 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.130394 kubelet[2663]: E0113 20:44:08.130039 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z9f87" Jan 13 20:44:08.130394 kubelet[2663]: E0113 20:44:08.130075 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z9f87" Jan 13 20:44:08.130394 kubelet[2663]: E0113 20:44:08.130142 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-z9f87_kube-system(24356970-d4ba-4d53-b9ce-72c96ee695b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-z9f87_kube-system(24356970-d4ba-4d53-b9ce-72c96ee695b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-z9f87" podUID="24356970-d4ba-4d53-b9ce-72c96ee695b4" Jan 13 20:44:08.149631 containerd[1485]: time="2025-01-13T20:44:08.149572254Z" level=error msg="Failed to destroy network for sandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.149989 containerd[1485]: time="2025-01-13T20:44:08.149963950Z" level=error msg="encountered an error cleaning up failed sandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.150069 containerd[1485]: time="2025-01-13T20:44:08.150048029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qkvcn,Uid:acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83,Namespace:kube-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.150320 kubelet[2663]: E0113 20:44:08.150290 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.150399 kubelet[2663]: E0113 20:44:08.150359 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qkvcn" Jan 13 20:44:08.150399 kubelet[2663]: E0113 20:44:08.150379 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qkvcn" Jan 13 20:44:08.150488 kubelet[2663]: E0113 20:44:08.150464 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qkvcn_kube-system(acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qkvcn_kube-system(acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qkvcn" podUID="acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83" Jan 13 20:44:08.165641 containerd[1485]: time="2025-01-13T20:44:08.165588702Z" level=error msg="Failed to destroy network for sandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.166348 containerd[1485]: time="2025-01-13T20:44:08.166308722Z" level=error msg="encountered an error cleaning up failed sandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.166446 containerd[1485]: time="2025-01-13T20:44:08.166370149Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-c845b497c-psj2v,Uid:0d99061e-8895-489c-be69-03c406284aa9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.166808 kubelet[2663]: E0113 20:44:08.166775 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.167317 kubelet[2663]: E0113 20:44:08.166996 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" Jan 13 20:44:08.167317 kubelet[2663]: E0113 20:44:08.167035 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" Jan 13 20:44:08.167317 kubelet[2663]: E0113 20:44:08.167097 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c845b497c-psj2v_calico-apiserver(0d99061e-8895-489c-be69-03c406284aa9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c845b497c-psj2v_calico-apiserver(0d99061e-8895-489c-be69-03c406284aa9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" podUID="0d99061e-8895-489c-be69-03c406284aa9" Jan 13 20:44:08.180911 containerd[1485]: time="2025-01-13T20:44:08.180852518Z" level=error msg="Failed to destroy network for sandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.181240 containerd[1485]: time="2025-01-13T20:44:08.181197876Z" level=error msg="encountered an error cleaning up failed sandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 13 20:44:08.181375 containerd[1485]: time="2025-01-13T20:44:08.181249214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dc874945-vclf2,Uid:5379735e-d3c8-481c-ba97-384db8752ee4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.181542 kubelet[2663]: E0113 20:44:08.181516 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.181597 kubelet[2663]: E0113 20:44:08.181568 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" Jan 13 20:44:08.181597 kubelet[2663]: E0113 20:44:08.181594 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" Jan 13 20:44:08.181707 kubelet[2663]: E0113 20:44:08.181648 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69dc874945-vclf2_calico-system(5379735e-d3c8-481c-ba97-384db8752ee4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69dc874945-vclf2_calico-system(5379735e-d3c8-481c-ba97-384db8752ee4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" podUID="5379735e-d3c8-481c-ba97-384db8752ee4" Jan 13 20:44:08.201596 containerd[1485]: time="2025-01-13T20:44:08.201523424Z" level=error msg="Failed to destroy network for sandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.202087 containerd[1485]: time="2025-01-13T20:44:08.202048574Z" level=error msg="encountered an error cleaning up failed sandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.202143 containerd[1485]: time="2025-01-13T20:44:08.202116613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-48828,Uid:10adf362-b95c-4368-9f5a-3041f3e43b8c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.202479 kubelet[2663]: E0113 20:44:08.202447 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.202537 kubelet[2663]: E0113 20:44:08.202518 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-48828" Jan 13 20:44:08.202579 kubelet[2663]: E0113 20:44:08.202545 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-48828" Jan 13 20:44:08.202637 kubelet[2663]: E0113 20:44:08.202621 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c845b497c-48828_calico-apiserver(10adf362-b95c-4368-9f5a-3041f3e43b8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c845b497c-48828_calico-apiserver(10adf362-b95c-4368-9f5a-3041f3e43b8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c845b497c-48828" podUID="10adf362-b95c-4368-9f5a-3041f3e43b8c" Jan 13 20:44:08.221069 systemd[1]: Created slice kubepods-besteffort-pod0740e80e_5301_4176_ac9e_7bf36ee863df.slice - libcontainer container kubepods-besteffort-pod0740e80e_5301_4176_ac9e_7bf36ee863df.slice. 
Jan 13 20:44:08.224229 containerd[1485]: time="2025-01-13T20:44:08.224150471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-74fml,Uid:0740e80e-5301-4176-ac9e-7bf36ee863df,Namespace:calico-system,Attempt:0,}" Jan 13 20:44:08.291616 containerd[1485]: time="2025-01-13T20:44:08.291480296Z" level=error msg="Failed to destroy network for sandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.291966 containerd[1485]: time="2025-01-13T20:44:08.291882441Z" level=error msg="encountered an error cleaning up failed sandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.291966 containerd[1485]: time="2025-01-13T20:44:08.291938468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-74fml,Uid:0740e80e-5301-4176-ac9e-7bf36ee863df,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.292227 kubelet[2663]: E0113 20:44:08.292176 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:08.292227 kubelet[2663]: E0113 20:44:08.292246 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-74fml" Jan 13 20:44:08.292227 kubelet[2663]: E0113 20:44:08.292266 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-74fml" Jan 13 20:44:08.292536 kubelet[2663]: E0113 20:44:08.292326 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-74fml_calico-system(0740e80e-5301-4176-ac9e-7bf36ee863df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-74fml_calico-system(0740e80e-5301-4176-ac9e-7bf36ee863df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-74fml" podUID="0740e80e-5301-4176-ac9e-7bf36ee863df" Jan 13 20:44:08.607343 kubelet[2663]: I0113 20:44:08.607230 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b" Jan 13 20:44:08.607807 containerd[1485]: time="2025-01-13T20:44:08.607754006Z" level=info msg="StopPodSandbox for \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\"" Jan 13 20:44:08.608007 containerd[1485]: time="2025-01-13T20:44:08.607981148Z" level=info msg="Ensure that sandbox 814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b in task-service has been cleanup successfully" Jan 13 20:44:08.608363 containerd[1485]: time="2025-01-13T20:44:08.608341764Z" level=info msg="TearDown network for sandbox \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\" successfully" Jan 13 20:44:08.608434 containerd[1485]: time="2025-01-13T20:44:08.608361802Z" level=info msg="StopPodSandbox for \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\" returns successfully" Jan 13 20:44:08.608596 kubelet[2663]: E0113 20:44:08.608578 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:08.608930 containerd[1485]: time="2025-01-13T20:44:08.608903724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9f87,Uid:24356970-d4ba-4d53-b9ce-72c96ee695b4,Namespace:kube-system,Attempt:1,}" Jan 13 20:44:08.610026 kubelet[2663]: E0113 20:44:08.609726 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:08.610302 containerd[1485]: time="2025-01-13T20:44:08.610272788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 20:44:08.610887 kubelet[2663]: I0113 20:44:08.610870 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c" Jan 13 20:44:08.611656 containerd[1485]: time="2025-01-13T20:44:08.611309191Z" level=info msg="StopPodSandbox for \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\"" Jan 13 20:44:08.611656 containerd[1485]: time="2025-01-13T20:44:08.611519922Z" level=info msg="Ensure that sandbox fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c in task-service has been cleanup successfully" Jan 13 20:44:08.611768 kubelet[2663]: I0113 20:44:08.611718 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2" Jan 13 20:44:08.611874 containerd[1485]: time="2025-01-13T20:44:08.611850501Z" level=info msg="TearDown network for sandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\" successfully" Jan 13 20:44:08.611874 containerd[1485]: time="2025-01-13T20:44:08.611870419Z" level=info msg="StopPodSandbox for \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\" returns successfully" Jan 13 20:44:08.612600 containerd[1485]: 
time="2025-01-13T20:44:08.612238369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-74fml,Uid:0740e80e-5301-4176-ac9e-7bf36ee863df,Namespace:calico-system,Attempt:1,}" Jan 13 20:44:08.612600 containerd[1485]: time="2025-01-13T20:44:08.612266362Z" level=info msg="StopPodSandbox for \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\"" Jan 13 20:44:08.612600 containerd[1485]: time="2025-01-13T20:44:08.612468467Z" level=info msg="Ensure that sandbox ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2 in task-service has been cleanup successfully" Jan 13 20:44:08.612706 kubelet[2663]: I0113 20:44:08.612670 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf" Jan 13 20:44:08.612819 containerd[1485]: time="2025-01-13T20:44:08.612783467Z" level=info msg="TearDown network for sandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\" successfully" Jan 13 20:44:08.612819 containerd[1485]: time="2025-01-13T20:44:08.612797774Z" level=info msg="StopPodSandbox for \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\" returns successfully" Jan 13 20:44:08.613035 containerd[1485]: time="2025-01-13T20:44:08.612993376Z" level=info msg="StopPodSandbox for \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\"" Jan 13 20:44:08.613375 containerd[1485]: time="2025-01-13T20:44:08.613218835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dc874945-vclf2,Uid:5379735e-d3c8-481c-ba97-384db8752ee4,Namespace:calico-system,Attempt:1,}" Jan 13 20:44:08.613375 containerd[1485]: time="2025-01-13T20:44:08.613247470Z" level=info msg="Ensure that sandbox 425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf in task-service has been cleanup successfully" Jan 13 20:44:08.613464 containerd[1485]: time="2025-01-13T20:44:08.613436640Z" level=info msg="TearDown network for sandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\" successfully" Jan 13 20:44:08.613464 containerd[1485]: time="2025-01-13T20:44:08.613452038Z" level=info msg="StopPodSandbox for \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\" returns successfully" Jan 13 20:44:08.613821 containerd[1485]: time="2025-01-13T20:44:08.613789120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-48828,Uid:10adf362-b95c-4368-9f5a-3041f3e43b8c,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:44:08.614155 kubelet[2663]: I0113 20:44:08.614079 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5" Jan 13 20:44:08.614550 containerd[1485]: time="2025-01-13T20:44:08.614523367Z" level=info msg="StopPodSandbox for \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\"" Jan 13 20:44:08.614727 containerd[1485]: time="2025-01-13T20:44:08.614703620Z" level=info msg="Ensure that sandbox d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5 in task-service has been cleanup successfully" Jan 13 20:44:08.615004 containerd[1485]: time="2025-01-13T20:44:08.614970809Z" level=info msg="TearDown network for sandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\" successfully" Jan 13 20:44:08.615004 containerd[1485]: time="2025-01-13T20:44:08.614990947Z" level=info msg="StopPodSandbox for 
\"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\" returns successfully" Jan 13 20:44:08.615175 kubelet[2663]: E0113 20:44:08.615158 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:08.615434 containerd[1485]: time="2025-01-13T20:44:08.615402310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qkvcn,Uid:acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83,Namespace:kube-system,Attempt:1,}" Jan 13 20:44:08.615490 kubelet[2663]: I0113 20:44:08.615468 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b" Jan 13 20:44:08.615783 containerd[1485]: time="2025-01-13T20:44:08.615761093Z" level=info msg="StopPodSandbox for \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\"" Jan 13 20:44:08.615951 containerd[1485]: time="2025-01-13T20:44:08.615933641Z" level=info msg="Ensure that sandbox 3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b in task-service has been cleanup successfully" Jan 13 20:44:08.616100 containerd[1485]: time="2025-01-13T20:44:08.616072105Z" level=info msg="TearDown network for sandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\" successfully" Jan 13 20:44:08.616100 containerd[1485]: time="2025-01-13T20:44:08.616087163Z" level=info msg="StopPodSandbox for \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\" returns successfully" Jan 13 20:44:08.619032 containerd[1485]: time="2025-01-13T20:44:08.616701372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-psj2v,Uid:0d99061e-8895-489c-be69-03c406284aa9,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:44:08.672423 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b-shm.mount: Deactivated successfully. 
Jan 13 20:44:09.634349 containerd[1485]: time="2025-01-13T20:44:09.634294448Z" level=error msg="Failed to destroy network for sandbox \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.634926 containerd[1485]: time="2025-01-13T20:44:09.634704708Z" level=error msg="encountered an error cleaning up failed sandbox \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.634926 containerd[1485]: time="2025-01-13T20:44:09.634766306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-48828,Uid:10adf362-b95c-4368-9f5a-3041f3e43b8c,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.635134 kubelet[2663]: E0113 20:44:09.635030 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.635134 kubelet[2663]: E0113 20:44:09.635084 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-48828" Jan 13 20:44:09.635134 kubelet[2663]: E0113 20:44:09.635103 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-48828" Jan 13 20:44:09.637391 kubelet[2663]: E0113 20:44:09.635164 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c845b497c-48828_calico-apiserver(10adf362-b95c-4368-9f5a-3041f3e43b8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c845b497c-48828_calico-apiserver(10adf362-b95c-4368-9f5a-3041f3e43b8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c845b497c-48828" podUID="10adf362-b95c-4368-9f5a-3041f3e43b8c" Jan 13 20:44:09.655167 containerd[1485]: time="2025-01-13T20:44:09.655121283Z" level=error msg="Failed to destroy network for sandbox \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.657207 containerd[1485]: time="2025-01-13T20:44:09.657153108Z" level=error msg="encountered an error cleaning up failed sandbox \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.657362 containerd[1485]: time="2025-01-13T20:44:09.657232288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9f87,Uid:24356970-d4ba-4d53-b9ce-72c96ee695b4,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.657497 kubelet[2663]: E0113 20:44:09.657456 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.657564 kubelet[2663]: E0113 20:44:09.657509 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z9f87" Jan 13 20:44:09.657564 kubelet[2663]: E0113 20:44:09.657529 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z9f87" Jan 13 20:44:09.657641 kubelet[2663]: E0113 20:44:09.657598 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-z9f87_kube-system(24356970-d4ba-4d53-b9ce-72c96ee695b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-z9f87_kube-system(24356970-d4ba-4d53-b9ce-72c96ee695b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-z9f87" podUID="24356970-d4ba-4d53-b9ce-72c96ee695b4" Jan 13 20:44:09.658138 containerd[1485]: time="2025-01-13T20:44:09.658098075Z" level=error msg="Failed to destroy network for sandbox \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.658656 containerd[1485]: time="2025-01-13T20:44:09.658424466Z" level=error msg="encountered an error cleaning up failed sandbox \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.658656 containerd[1485]: time="2025-01-13T20:44:09.658488798Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-74fml,Uid:0740e80e-5301-4176-ac9e-7bf36ee863df,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.658755 kubelet[2663]: E0113 20:44:09.658666 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.658755 kubelet[2663]: E0113 20:44:09.658713 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-74fml" Jan 13 20:44:09.658755 kubelet[2663]: E0113 20:44:09.658733 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-74fml" Jan 13 20:44:09.658864 kubelet[2663]: E0113 20:44:09.658783 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-74fml_calico-system(0740e80e-5301-4176-ac9e-7bf36ee863df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-74fml_calico-system(0740e80e-5301-4176-ac9e-7bf36ee863df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-74fml" podUID="0740e80e-5301-4176-ac9e-7bf36ee863df" Jan 13 20:44:09.669469 containerd[1485]: time="2025-01-13T20:44:09.669086597Z" level=error msg="Failed to destroy network for sandbox \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.669625 containerd[1485]: time="2025-01-13T20:44:09.669592579Z" level=error msg="encountered an error cleaning up failed sandbox \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.669681 containerd[1485]: time="2025-01-13T20:44:09.669652302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dc874945-vclf2,Uid:5379735e-d3c8-481c-ba97-384db8752ee4,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.669942 kubelet[2663]: E0113 20:44:09.669916 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.670031 kubelet[2663]: E0113 20:44:09.669980 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" Jan 13 20:44:09.670067 kubelet[2663]: E0113 20:44:09.670030 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" Jan 13 20:44:09.670110 kubelet[2663]: E0113 20:44:09.670101 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69dc874945-vclf2_calico-system(5379735e-d3c8-481c-ba97-384db8752ee4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69dc874945-vclf2_calico-system(5379735e-d3c8-481c-ba97-384db8752ee4)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" podUID="5379735e-d3c8-481c-ba97-384db8752ee4" Jan 13 20:44:09.671243 containerd[1485]: time="2025-01-13T20:44:09.671050742Z" level=error msg="Failed to destroy network for sandbox \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.671643 containerd[1485]: time="2025-01-13T20:44:09.671612642Z" level=error msg="encountered an error cleaning up failed sandbox \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.671715 containerd[1485]: time="2025-01-13T20:44:09.671663408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-psj2v,Uid:0d99061e-8895-489c-be69-03c406284aa9,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.672380 kubelet[2663]: E0113 20:44:09.672317 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.672454 kubelet[2663]: E0113 20:44:09.672416 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" Jan 13 20:44:09.672498 kubelet[2663]: E0113 20:44:09.672482 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" Jan 13 20:44:09.672759 kubelet[2663]: E0113 20:44:09.672582 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c845b497c-psj2v_calico-apiserver(0d99061e-8895-489c-be69-03c406284aa9)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"calico-apiserver-c845b497c-psj2v_calico-apiserver(0d99061e-8895-489c-be69-03c406284aa9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" podUID="0d99061e-8895-489c-be69-03c406284aa9" Jan 13 20:44:09.673723 containerd[1485]: time="2025-01-13T20:44:09.673627063Z" level=error msg="Failed to destroy network for sandbox \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.674047 containerd[1485]: time="2025-01-13T20:44:09.674006896Z" level=error msg="encountered an error cleaning up failed sandbox \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.674195 containerd[1485]: time="2025-01-13T20:44:09.674132865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qkvcn,Uid:acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.674452 kubelet[2663]: E0113 20:44:09.674421 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:09.674497 kubelet[2663]: E0113 20:44:09.674490 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qkvcn" Jan 13 20:44:09.674534 kubelet[2663]: E0113 20:44:09.674514 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qkvcn" Jan 13 20:44:09.674588 kubelet[2663]: E0113 20:44:09.674577 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-76f75df574-qkvcn_kube-system(acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qkvcn_kube-system(acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qkvcn" podUID="acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83" Jan 13 20:44:09.678037 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98-shm.mount: Deactivated successfully. Jan 13 20:44:09.678524 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4-shm.mount: Deactivated successfully. Jan 13 20:44:09.682562 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0-shm.mount: Deactivated successfully. Jan 13 20:44:09.682683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e-shm.mount: Deactivated successfully. Jan 13 20:44:10.352440 systemd[1]: Started sshd@8-10.0.0.148:22-10.0.0.1:37174.service - OpenSSH per-connection server daemon (10.0.0.1:37174). Jan 13 20:44:10.399105 sshd[3921]: Accepted publickey for core from 10.0.0.1 port 37174 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:10.401070 sshd-session[3921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:10.405566 systemd-logind[1473]: New session 9 of user core. Jan 13 20:44:10.415265 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:44:10.561211 sshd[3923]: Connection closed by 10.0.0.1 port 37174 Jan 13 20:44:10.561624 sshd-session[3921]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:10.565093 systemd[1]: sshd@8-10.0.0.148:22-10.0.0.1:37174.service: Deactivated successfully. Jan 13 20:44:10.567300 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:44:10.569128 systemd-logind[1473]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:44:10.570284 systemd-logind[1473]: Removed session 9. 
Jan 13 20:44:10.621377 kubelet[2663]: I0113 20:44:10.621342 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98" Jan 13 20:44:10.622386 containerd[1485]: time="2025-01-13T20:44:10.622345415Z" level=info msg="StopPodSandbox for \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\"" Jan 13 20:44:10.623773 containerd[1485]: time="2025-01-13T20:44:10.622756837Z" level=info msg="Ensure that sandbox 8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98 in task-service has been cleanup successfully" Jan 13 20:44:10.623828 kubelet[2663]: I0113 20:44:10.623699 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983" Jan 13 20:44:10.624217 containerd[1485]: time="2025-01-13T20:44:10.624140188Z" level=info msg="StopPodSandbox for \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\"" Jan 13 20:44:10.624395 containerd[1485]: time="2025-01-13T20:44:10.624339567Z" level=info msg="Ensure that sandbox eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983 in task-service has been cleanup successfully" Jan 13 20:44:10.629056 containerd[1485]: time="2025-01-13T20:44:10.624605883Z" level=info msg="TearDown network for sandbox \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\" successfully" Jan 13 20:44:10.626588 systemd[1]: run-netns-cni\x2d055d1a56\x2d2b8c\x2d4d72\x2d82c9\x2da5229da02cb6.mount: Deactivated successfully. Jan 13 20:44:10.629176 kubelet[2663]: I0113 20:44:10.626334 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4" Jan 13 20:44:10.626700 systemd[1]: run-netns-cni\x2d57acf8a4\x2dfb70\x2d0e10\x2d8181\x2dc314cfce2774.mount: Deactivated successfully. 
Jan 13 20:44:10.629333 containerd[1485]: time="2025-01-13T20:44:10.629291913Z" level=info msg="StopPodSandbox for \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\" returns successfully" Jan 13 20:44:10.629333 containerd[1485]: time="2025-01-13T20:44:10.629242298Z" level=info msg="StopPodSandbox for \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\"" Jan 13 20:44:10.629542 containerd[1485]: time="2025-01-13T20:44:10.629521750Z" level=info msg="Ensure that sandbox a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4 in task-service has been cleanup successfully" Jan 13 20:44:10.629813 containerd[1485]: time="2025-01-13T20:44:10.625253414Z" level=info msg="TearDown network for sandbox \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\" successfully" Jan 13 20:44:10.629841 containerd[1485]: time="2025-01-13T20:44:10.629807553Z" level=info msg="StopPodSandbox for \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\" returns successfully" Jan 13 20:44:10.630360 containerd[1485]: time="2025-01-13T20:44:10.630240156Z" level=info msg="TearDown network for sandbox \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\" successfully" Jan 13 20:44:10.630360 containerd[1485]: time="2025-01-13T20:44:10.630260414Z" level=info msg="StopPodSandbox for \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\" returns successfully" Jan 13 20:44:10.635123 containerd[1485]: time="2025-01-13T20:44:10.630747460Z" level=info msg="StopPodSandbox for \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\"" Jan 13 20:44:10.635123 containerd[1485]: time="2025-01-13T20:44:10.630855316Z" level=info msg="TearDown network for sandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\" successfully" Jan 13 20:44:10.635123 containerd[1485]: time="2025-01-13T20:44:10.630870304Z" level=info msg="StopPodSandbox for \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\" returns successfully" Jan 13 20:44:10.635123 containerd[1485]: time="2025-01-13T20:44:10.630930408Z" level=info msg="StopPodSandbox for \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\"" Jan 13 20:44:10.635123 containerd[1485]: time="2025-01-13T20:44:10.631061086Z" level=info msg="TearDown network for sandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\" successfully" Jan 13 20:44:10.635123 containerd[1485]: time="2025-01-13T20:44:10.631175905Z" level=info msg="StopPodSandbox for \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\" returns successfully" Jan 13 20:44:10.635123 containerd[1485]: time="2025-01-13T20:44:10.632112335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-74fml,Uid:0740e80e-5301-4176-ac9e-7bf36ee863df,Namespace:calico-system,Attempt:2,}" Jan 13 20:44:10.635123 containerd[1485]: time="2025-01-13T20:44:10.632619229Z" level=info msg="StopPodSandbox for \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\"" Jan 13 20:44:10.635123 containerd[1485]: time="2025-01-13T20:44:10.632739118Z" level=info msg="TearDown network for sandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\" successfully" Jan 13 20:44:10.635123 containerd[1485]: time="2025-01-13T20:44:10.632780656Z" level=info msg="StopPodSandbox for \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\" returns successfully" Jan 13 20:44:10.635563 systemd[1]: 
run-netns-cni\x2d5d0bee3e\x2d47f1\x2df6a2\x2dd568\x2d814fe1377d37.mount: Deactivated successfully. Jan 13 20:44:10.636117 containerd[1485]: time="2025-01-13T20:44:10.636073848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dc874945-vclf2,Uid:5379735e-d3c8-481c-ba97-384db8752ee4,Namespace:calico-system,Attempt:2,}" Jan 13 20:44:10.637321 containerd[1485]: time="2025-01-13T20:44:10.637295911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-48828,Uid:10adf362-b95c-4368-9f5a-3041f3e43b8c,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:44:10.637391 kubelet[2663]: I0113 20:44:10.637368 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e" Jan 13 20:44:10.638618 containerd[1485]: time="2025-01-13T20:44:10.638038683Z" level=info msg="StopPodSandbox for \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\"" Jan 13 20:44:10.639408 containerd[1485]: time="2025-01-13T20:44:10.639386236Z" level=info msg="Ensure that sandbox 1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e in task-service has been cleanup successfully" Jan 13 20:44:10.641799 containerd[1485]: time="2025-01-13T20:44:10.641761953Z" level=info msg="TearDown network for sandbox \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\" successfully" Jan 13 20:44:10.641799 containerd[1485]: time="2025-01-13T20:44:10.641789856Z" level=info msg="StopPodSandbox for \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\" returns successfully" Jan 13 20:44:10.642470 containerd[1485]: time="2025-01-13T20:44:10.642448849Z" level=info msg="StopPodSandbox for \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\"" Jan 13 20:44:10.642544 containerd[1485]: time="2025-01-13T20:44:10.642528119Z" level=info msg="TearDown network for sandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\" successfully" Jan 13 20:44:10.642544 containerd[1485]: time="2025-01-13T20:44:10.642541705Z" level=info msg="StopPodSandbox for \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\" returns successfully" Jan 13 20:44:10.642713 kubelet[2663]: E0113 20:44:10.642684 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:10.643009 containerd[1485]: time="2025-01-13T20:44:10.642984116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qkvcn,Uid:acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83,Namespace:kube-system,Attempt:2,}" Jan 13 20:44:10.643592 kubelet[2663]: I0113 20:44:10.643562 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0" Jan 13 20:44:10.645929 containerd[1485]: time="2025-01-13T20:44:10.645890062Z" level=info msg="StopPodSandbox for \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\"" Jan 13 20:44:10.646383 systemd[1]: run-netns-cni\x2d7a21301a\x2dbbf1\x2d43bb\x2d9a52\x2dec2e080576a7.mount: Deactivated successfully. 
Jan 13 20:44:10.648377 containerd[1485]: time="2025-01-13T20:44:10.648195706Z" level=info msg="Ensure that sandbox a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0 in task-service has been cleanup successfully" Jan 13 20:44:10.650389 containerd[1485]: time="2025-01-13T20:44:10.650357045Z" level=info msg="TearDown network for sandbox \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\" successfully" Jan 13 20:44:10.650389 containerd[1485]: time="2025-01-13T20:44:10.650382553Z" level=info msg="StopPodSandbox for \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\" returns successfully" Jan 13 20:44:10.651602 containerd[1485]: time="2025-01-13T20:44:10.650966704Z" level=info msg="StopPodSandbox for \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\"" Jan 13 20:44:10.651871 containerd[1485]: time="2025-01-13T20:44:10.651776283Z" level=info msg="TearDown network for sandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\" successfully" Jan 13 20:44:10.651871 containerd[1485]: time="2025-01-13T20:44:10.651795210Z" level=info msg="StopPodSandbox for \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\" returns successfully" Jan 13 20:44:10.655650 containerd[1485]: time="2025-01-13T20:44:10.655626053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-psj2v,Uid:0d99061e-8895-489c-be69-03c406284aa9,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:44:10.662330 kubelet[2663]: I0113 20:44:10.662295 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366" Jan 13 20:44:10.662708 containerd[1485]: time="2025-01-13T20:44:10.662672812Z" level=info msg="StopPodSandbox for \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\"" Jan 13 20:44:10.663129 containerd[1485]: time="2025-01-13T20:44:10.662907628Z" level=info msg="Ensure that sandbox e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366 in task-service has been cleanup successfully" Jan 13 20:44:10.663278 containerd[1485]: time="2025-01-13T20:44:10.663245741Z" level=info msg="TearDown network for sandbox \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\" successfully" Jan 13 20:44:10.663309 containerd[1485]: time="2025-01-13T20:44:10.663277011Z" level=info msg="StopPodSandbox for \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\" returns successfully" Jan 13 20:44:10.663611 containerd[1485]: time="2025-01-13T20:44:10.663578584Z" level=info msg="StopPodSandbox for \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\"" Jan 13 20:44:10.663717 containerd[1485]: time="2025-01-13T20:44:10.663697410Z" level=info msg="TearDown network for sandbox \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\" successfully" Jan 13 20:44:10.663745 containerd[1485]: time="2025-01-13T20:44:10.663715935Z" level=info msg="StopPodSandbox for \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\" returns successfully" Jan 13 20:44:10.664009 kubelet[2663]: E0113 20:44:10.663977 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:10.665462 containerd[1485]: time="2025-01-13T20:44:10.664926466Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-z9f87,Uid:24356970-d4ba-4d53-b9ce-72c96ee695b4,Namespace:kube-system,Attempt:2,}" Jan 13 20:44:10.673039 systemd[1]: run-netns-cni\x2d9cb83737\x2d261d\x2da638\x2d7a36\x2d04776cf3abd0.mount: Deactivated successfully. Jan 13 20:44:10.673144 systemd[1]: run-netns-cni\x2dc58c840d\x2d6396\x2dee64\x2d8666\x2d1b4d4871059b.mount: Deactivated successfully. Jan 13 20:44:11.305991 containerd[1485]: time="2025-01-13T20:44:11.305794196Z" level=error msg="Failed to destroy network for sandbox \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.318125 containerd[1485]: time="2025-01-13T20:44:11.318079345Z" level=error msg="encountered an error cleaning up failed sandbox \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.318390 containerd[1485]: time="2025-01-13T20:44:11.318371360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qkvcn,Uid:acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.341070 kubelet[2663]: E0113 20:44:11.341034 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.341253 kubelet[2663]: E0113 20:44:11.341096 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qkvcn" Jan 13 20:44:11.341253 kubelet[2663]: E0113 20:44:11.341118 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qkvcn" Jan 13 20:44:11.341253 kubelet[2663]: E0113 20:44:11.341165 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qkvcn_kube-system(acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-qkvcn_kube-system(acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qkvcn" podUID="acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83" Jan 13 20:44:11.351677 containerd[1485]: time="2025-01-13T20:44:11.350878193Z" level=error msg="Failed to destroy network for sandbox \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.352139 containerd[1485]: time="2025-01-13T20:44:11.352114533Z" level=error msg="encountered an error cleaning up failed sandbox \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.352289 containerd[1485]: time="2025-01-13T20:44:11.352253166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-psj2v,Uid:0d99061e-8895-489c-be69-03c406284aa9,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.352656 kubelet[2663]: E0113 20:44:11.352629 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.352855 kubelet[2663]: E0113 20:44:11.352844 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" Jan 13 20:44:11.352923 kubelet[2663]: E0113 20:44:11.352915 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" Jan 13 20:44:11.353110 kubelet[2663]: E0113 20:44:11.353097 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-c845b497c-psj2v_calico-apiserver(0d99061e-8895-489c-be69-03c406284aa9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c845b497c-psj2v_calico-apiserver(0d99061e-8895-489c-be69-03c406284aa9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" podUID="0d99061e-8895-489c-be69-03c406284aa9" Jan 13 20:44:11.373791 containerd[1485]: time="2025-01-13T20:44:11.373734204Z" level=error msg="Failed to destroy network for sandbox \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.374839 containerd[1485]: time="2025-01-13T20:44:11.374813516Z" level=error msg="encountered an error cleaning up failed sandbox \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.374897 containerd[1485]: time="2025-01-13T20:44:11.374874100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-48828,Uid:10adf362-b95c-4368-9f5a-3041f3e43b8c,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.375168 kubelet[2663]: E0113 20:44:11.375145 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.375274 kubelet[2663]: E0113 20:44:11.375257 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-48828" Jan 13 20:44:11.375316 kubelet[2663]: E0113 20:44:11.375288 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-48828" Jan 13 
20:44:11.375350 containerd[1485]: time="2025-01-13T20:44:11.375254273Z" level=error msg="Failed to destroy network for sandbox \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.375385 kubelet[2663]: E0113 20:44:11.375353 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c845b497c-48828_calico-apiserver(10adf362-b95c-4368-9f5a-3041f3e43b8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c845b497c-48828_calico-apiserver(10adf362-b95c-4368-9f5a-3041f3e43b8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c845b497c-48828" podUID="10adf362-b95c-4368-9f5a-3041f3e43b8c" Jan 13 20:44:11.375776 containerd[1485]: time="2025-01-13T20:44:11.375737992Z" level=error msg="encountered an error cleaning up failed sandbox \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.375845 containerd[1485]: time="2025-01-13T20:44:11.375812123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dc874945-vclf2,Uid:5379735e-d3c8-481c-ba97-384db8752ee4,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.377393 kubelet[2663]: E0113 20:44:11.377366 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.377456 kubelet[2663]: E0113 20:44:11.377422 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" Jan 13 20:44:11.377456 kubelet[2663]: E0113 20:44:11.377455 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" Jan 13 20:44:11.377644 kubelet[2663]: E0113 20:44:11.377504 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69dc874945-vclf2_calico-system(5379735e-d3c8-481c-ba97-384db8752ee4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69dc874945-vclf2_calico-system(5379735e-d3c8-481c-ba97-384db8752ee4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" podUID="5379735e-d3c8-481c-ba97-384db8752ee4" Jan 13 20:44:11.384967 containerd[1485]: time="2025-01-13T20:44:11.384905006Z" level=error msg="Failed to destroy network for sandbox \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.385397 containerd[1485]: time="2025-01-13T20:44:11.385301098Z" level=error msg="encountered an error cleaning up failed sandbox \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.385397 containerd[1485]: time="2025-01-13T20:44:11.385354900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-74fml,Uid:0740e80e-5301-4176-ac9e-7bf36ee863df,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.385613 kubelet[2663]: E0113 20:44:11.385591 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.385659 kubelet[2663]: E0113 20:44:11.385642 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-74fml" Jan 13 20:44:11.385685 kubelet[2663]: E0113 20:44:11.385663 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-74fml" Jan 13 20:44:11.385730 kubelet[2663]: E0113 20:44:11.385712 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-74fml_calico-system(0740e80e-5301-4176-ac9e-7bf36ee863df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-74fml_calico-system(0740e80e-5301-4176-ac9e-7bf36ee863df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-74fml" podUID="0740e80e-5301-4176-ac9e-7bf36ee863df" Jan 13 20:44:11.411697 containerd[1485]: time="2025-01-13T20:44:11.411638496Z" level=error msg="Failed to destroy network for sandbox \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.412457 containerd[1485]: time="2025-01-13T20:44:11.412310513Z" level=error msg="encountered an error cleaning up failed sandbox \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.412457 containerd[1485]: time="2025-01-13T20:44:11.412380296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9f87,Uid:24356970-d4ba-4d53-b9ce-72c96ee695b4,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.412755 kubelet[2663]: E0113 20:44:11.412686 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.412807 kubelet[2663]: E0113 20:44:11.412775 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z9f87" Jan 13 20:44:11.412807 kubelet[2663]: E0113 20:44:11.412800 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z9f87" Jan 13 20:44:11.412991 kubelet[2663]: E0113 20:44:11.412974 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-z9f87_kube-system(24356970-d4ba-4d53-b9ce-72c96ee695b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-z9f87_kube-system(24356970-d4ba-4d53-b9ce-72c96ee695b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-z9f87" podUID="24356970-d4ba-4d53-b9ce-72c96ee695b4" Jan 13 20:44:11.666270 kubelet[2663]: I0113 20:44:11.666233 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7" Jan 13 20:44:11.666910 containerd[1485]: time="2025-01-13T20:44:11.666753131Z" level=info msg="StopPodSandbox for \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\"" Jan 13 20:44:11.667215 containerd[1485]: time="2025-01-13T20:44:11.666953221Z" level=info msg="Ensure that sandbox 927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7 in task-service has been cleanup successfully" Jan 13 20:44:11.667252 containerd[1485]: time="2025-01-13T20:44:11.667215340Z" level=info msg="TearDown network for sandbox \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\" successfully" Jan 13 20:44:11.667252 containerd[1485]: time="2025-01-13T20:44:11.667228615Z" level=info msg="StopPodSandbox for \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\" returns successfully" Jan 13 20:44:11.667602 containerd[1485]: time="2025-01-13T20:44:11.667575364Z" level=info msg="StopPodSandbox for \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\"" Jan 13 20:44:11.667780 containerd[1485]: time="2025-01-13T20:44:11.667656438Z" level=info msg="TearDown network for sandbox \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\" successfully" Jan 13 20:44:11.667780 containerd[1485]: time="2025-01-13T20:44:11.667670445Z" level=info msg="StopPodSandbox for \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\" returns successfully" Jan 13 20:44:11.668172 containerd[1485]: time="2025-01-13T20:44:11.668141220Z" level=info msg="StopPodSandbox for \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\"" Jan 13 20:44:11.668335 containerd[1485]: time="2025-01-13T20:44:11.668312535Z" level=info msg="TearDown network for sandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\" successfully" Jan 13 20:44:11.668482 containerd[1485]: time="2025-01-13T20:44:11.668403789Z" level=info msg="StopPodSandbox for \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\" returns successfully" Jan 13 20:44:11.669191 containerd[1485]: time="2025-01-13T20:44:11.669161679Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-c845b497c-48828,Uid:10adf362-b95c-4368-9f5a-3041f3e43b8c,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:44:11.669405 kubelet[2663]: I0113 20:44:11.669376 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f" Jan 13 20:44:11.670003 containerd[1485]: time="2025-01-13T20:44:11.669980075Z" level=info msg="StopPodSandbox for \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\"" Jan 13 20:44:11.670179 containerd[1485]: time="2025-01-13T20:44:11.670160879Z" level=info msg="Ensure that sandbox 242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f in task-service has been cleanup successfully" Jan 13 20:44:11.670540 containerd[1485]: time="2025-01-13T20:44:11.670453665Z" level=info msg="TearDown network for sandbox \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\" successfully" Jan 13 20:44:11.670540 containerd[1485]: time="2025-01-13T20:44:11.670472371Z" level=info msg="StopPodSandbox for \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\" returns successfully" Jan 13 20:44:11.671109 containerd[1485]: time="2025-01-13T20:44:11.671087189Z" level=info msg="StopPodSandbox for \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\"" Jan 13 20:44:11.671185 containerd[1485]: time="2025-01-13T20:44:11.671159356Z" level=info msg="TearDown network for sandbox \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\" successfully" Jan 13 20:44:11.671185 containerd[1485]: time="2025-01-13T20:44:11.671174686Z" level=info msg="StopPodSandbox for \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\" returns successfully" Jan 13 20:44:11.671528 containerd[1485]: time="2025-01-13T20:44:11.671479154Z" level=info msg="StopPodSandbox for \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\"" Jan 13 20:44:11.671846 containerd[1485]: time="2025-01-13T20:44:11.671565418Z" level=info msg="TearDown network for sandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\" successfully" Jan 13 20:44:11.671846 containerd[1485]: time="2025-01-13T20:44:11.671576449Z" level=info msg="StopPodSandbox for \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\" returns successfully" Jan 13 20:44:11.671933 kubelet[2663]: E0113 20:44:11.671735 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:11.672467 containerd[1485]: time="2025-01-13T20:44:11.672401697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qkvcn,Uid:acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83,Namespace:kube-system,Attempt:3,}" Jan 13 20:44:11.672628 kubelet[2663]: I0113 20:44:11.672439 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8" Jan 13 20:44:11.673286 containerd[1485]: time="2025-01-13T20:44:11.673037677Z" level=info msg="StopPodSandbox for \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\"" Jan 13 20:44:11.673286 containerd[1485]: time="2025-01-13T20:44:11.673215744Z" level=info msg="Ensure that sandbox 7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8 in task-service has been cleanup successfully" Jan 13 20:44:11.673706 containerd[1485]: time="2025-01-13T20:44:11.673651573Z" 
level=info msg="TearDown network for sandbox \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\" successfully" Jan 13 20:44:11.673706 containerd[1485]: time="2025-01-13T20:44:11.673669217Z" level=info msg="StopPodSandbox for \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\" returns successfully" Jan 13 20:44:11.674107 containerd[1485]: time="2025-01-13T20:44:11.673971281Z" level=info msg="StopPodSandbox for \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\"" Jan 13 20:44:11.674107 containerd[1485]: time="2025-01-13T20:44:11.674065239Z" level=info msg="TearDown network for sandbox \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\" successfully" Jan 13 20:44:11.674107 containerd[1485]: time="2025-01-13T20:44:11.674075438Z" level=info msg="StopPodSandbox for \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\" returns successfully" Jan 13 20:44:11.674370 containerd[1485]: time="2025-01-13T20:44:11.674321005Z" level=info msg="StopPodSandbox for \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\"" Jan 13 20:44:11.674408 containerd[1485]: time="2025-01-13T20:44:11.674393793Z" level=info msg="TearDown network for sandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\" successfully" Jan 13 20:44:11.674408 containerd[1485]: time="2025-01-13T20:44:11.674403222Z" level=info msg="StopPodSandbox for \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\" returns successfully" Jan 13 20:44:11.675680 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8-shm.mount: Deactivated successfully. Jan 13 20:44:11.676285 systemd[1]: run-netns-cni\x2ddf57c111\x2dd56d\x2d71b0\x2d1c3f\x2dd6f4073fbba5.mount: Deactivated successfully. Jan 13 20:44:11.676467 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7-shm.mount: Deactivated successfully. Jan 13 20:44:11.676859 containerd[1485]: time="2025-01-13T20:44:11.676496750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-psj2v,Uid:0d99061e-8895-489c-be69-03c406284aa9,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:44:11.677328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd-shm.mount: Deactivated successfully. Jan 13 20:44:11.677976 systemd[1]: run-netns-cni\x2d788d1d34\x2d9d9c\x2d693c\x2ded36\x2d220261e407e1.mount: Deactivated successfully. Jan 13 20:44:11.678097 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f-shm.mount: Deactivated successfully. 
Jan 13 20:44:11.684117 kubelet[2663]: I0113 20:44:11.682991 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44" Jan 13 20:44:11.684218 containerd[1485]: time="2025-01-13T20:44:11.684068562Z" level=info msg="StopPodSandbox for \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\"" Jan 13 20:44:11.684265 containerd[1485]: time="2025-01-13T20:44:11.684253122Z" level=info msg="Ensure that sandbox fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44 in task-service has been cleanup successfully" Jan 13 20:44:11.687298 containerd[1485]: time="2025-01-13T20:44:11.686535941Z" level=info msg="TearDown network for sandbox \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\" successfully" Jan 13 20:44:11.687298 containerd[1485]: time="2025-01-13T20:44:11.686556921Z" level=info msg="StopPodSandbox for \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\" returns successfully" Jan 13 20:44:11.687613 containerd[1485]: time="2025-01-13T20:44:11.687586909Z" level=info msg="StopPodSandbox for \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\"" Jan 13 20:44:11.687687 containerd[1485]: time="2025-01-13T20:44:11.687664837Z" level=info msg="TearDown network for sandbox \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\" successfully" Jan 13 20:44:11.687687 containerd[1485]: time="2025-01-13T20:44:11.687681137Z" level=info msg="StopPodSandbox for \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\" returns successfully" Jan 13 20:44:11.687834 systemd[1]: run-netns-cni\x2ddc9cd2e0\x2d49c1\x2d8858\x2d8ee6\x2d66a564f0cb6c.mount: Deactivated successfully. Jan 13 20:44:11.687935 systemd[1]: run-netns-cni\x2d70adac23\x2d0cd8\x2d6000\x2dedaf\x2d0f2ba433bde4.mount: Deactivated successfully. 
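The run-netns-cni\x2d... units systemd reports above are escaped forms of the paths /run/netns/cni-<uuid>: in a unit name, '-' encodes a path separator, so literal dashes are written as \x2d. A small decoder, sufficient for the names in this log (systemd escapes other reserved bytes with the same \xNN scheme):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit decodes systemd's \xNN escapes and turns the remaining
// '-' separators back into '/', so
// "run-netns-cni\x2d9cb83737....mount" -> "/run/netns/cni-9cb83737...".
func unescapeUnit(unit string) string {
	unit = strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(unit); i++ {
		switch {
		case strings.HasPrefix(unit[i:], `\x`) && i+4 <= len(unit):
			if n, err := strconv.ParseUint(unit[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(n))
				i += 3 // skip the rest of the \xNN escape
				continue
			}
			b.WriteByte('\\')
		case unit[i] == '-':
			b.WriteByte('/')
		default:
			b.WriteByte(unit[i])
		}
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeUnit(`run-netns-cni\x2d9cb83737\x2d261d\x2da638\x2d7a36\x2d04776cf3abd0.mount`))
	// Output: /run/netns/cni-9cb83737-261d-a638-7a36-04776cf3abd0
}
```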
Jan 13 20:44:11.688358 kubelet[2663]: I0113 20:44:11.688263 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1" Jan 13 20:44:11.690109 containerd[1485]: time="2025-01-13T20:44:11.688633748Z" level=info msg="StopPodSandbox for \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\"" Jan 13 20:44:11.690109 containerd[1485]: time="2025-01-13T20:44:11.688772842Z" level=info msg="Ensure that sandbox 79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1 in task-service has been cleanup successfully" Jan 13 20:44:11.690109 containerd[1485]: time="2025-01-13T20:44:11.689028188Z" level=info msg="StopPodSandbox for \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\"" Jan 13 20:44:11.690109 containerd[1485]: time="2025-01-13T20:44:11.689101007Z" level=info msg="TearDown network for sandbox \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\" successfully" Jan 13 20:44:11.690109 containerd[1485]: time="2025-01-13T20:44:11.689110224Z" level=info msg="StopPodSandbox for \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\" returns successfully" Jan 13 20:44:11.690251 kubelet[2663]: E0113 20:44:11.690202 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:11.692366 containerd[1485]: time="2025-01-13T20:44:11.690518380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9f87,Uid:24356970-d4ba-4d53-b9ce-72c96ee695b4,Namespace:kube-system,Attempt:3,}" Jan 13 20:44:11.692366 containerd[1485]: time="2025-01-13T20:44:11.690784987Z" level=info msg="TearDown network for sandbox \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\" successfully" Jan 13 20:44:11.692366 containerd[1485]: time="2025-01-13T20:44:11.690799245Z" level=info msg="StopPodSandbox for \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\" returns successfully" Jan 13 20:44:11.694170 containerd[1485]: time="2025-01-13T20:44:11.692688184Z" level=info msg="StopPodSandbox for \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\"" Jan 13 20:44:11.694170 containerd[1485]: time="2025-01-13T20:44:11.692763007Z" level=info msg="TearDown network for sandbox \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\" successfully" Jan 13 20:44:11.694170 containerd[1485]: time="2025-01-13T20:44:11.692772314Z" level=info msg="StopPodSandbox for \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\" returns successfully" Jan 13 20:44:11.693653 systemd[1]: run-netns-cni\x2da9905781\x2d5482\x2d2e30\x2d6181\x2d48441dc61e1e.mount: Deactivated successfully. 
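The dns.go:153 warnings above are about a resolver limit, not a failure of this node's pods: glibc honors at most three nameserver entries, so kubelet truncates the host's resolv.conf to the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and warns that the rest were omitted. A sketch of that truncation, simplified from what kubelet's parser actually handles:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// glibc's resolver honors at most 3 nameserver entries (MAXNS); this is
// the limit the kubelet dns.go warning above refers to.
const maxNameservers = 3

// trimNameservers keeps the first maxNameservers "nameserver" lines of a
// resolv.conf. Illustration only; kubelet also parses search/options.
func trimNameservers(resolvConf string) []string {
	var ns []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > maxNameservers {
		ns = ns[:maxNameservers]
	}
	return ns
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(trimNameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```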
Jan 13 20:44:11.696169 containerd[1485]: time="2025-01-13T20:44:11.695857618Z" level=info msg="StopPodSandbox for \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\"" Jan 13 20:44:11.696169 containerd[1485]: time="2025-01-13T20:44:11.695938271Z" level=info msg="TearDown network for sandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\" successfully" Jan 13 20:44:11.696169 containerd[1485]: time="2025-01-13T20:44:11.695947209Z" level=info msg="StopPodSandbox for \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\" returns successfully" Jan 13 20:44:11.697506 containerd[1485]: time="2025-01-13T20:44:11.697428154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-74fml,Uid:0740e80e-5301-4176-ac9e-7bf36ee863df,Namespace:calico-system,Attempt:3,}" Jan 13 20:44:11.697846 kubelet[2663]: I0113 20:44:11.697816 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd" Jan 13 20:44:11.698947 containerd[1485]: time="2025-01-13T20:44:11.698918245Z" level=info msg="StopPodSandbox for \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\"" Jan 13 20:44:11.699715 containerd[1485]: time="2025-01-13T20:44:11.699096924Z" level=info msg="Ensure that sandbox 446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd in task-service has been cleanup successfully" Jan 13 20:44:11.699715 containerd[1485]: time="2025-01-13T20:44:11.699442431Z" level=info msg="TearDown network for sandbox \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\" successfully" Jan 13 20:44:11.699715 containerd[1485]: time="2025-01-13T20:44:11.699455006Z" level=info msg="StopPodSandbox for \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\" returns successfully" Jan 13 20:44:11.700210 containerd[1485]: time="2025-01-13T20:44:11.700180544Z" level=info msg="StopPodSandbox for \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\"" Jan 13 20:44:11.701067 containerd[1485]: time="2025-01-13T20:44:11.700253192Z" level=info msg="TearDown network for sandbox \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\" successfully" Jan 13 20:44:11.701067 containerd[1485]: time="2025-01-13T20:44:11.700263952Z" level=info msg="StopPodSandbox for \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\" returns successfully" Jan 13 20:44:11.701067 containerd[1485]: time="2025-01-13T20:44:11.700629077Z" level=info msg="StopPodSandbox for \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\"" Jan 13 20:44:11.701067 containerd[1485]: time="2025-01-13T20:44:11.700701054Z" level=info msg="TearDown network for sandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\" successfully" Jan 13 20:44:11.701067 containerd[1485]: time="2025-01-13T20:44:11.700709710Z" level=info msg="StopPodSandbox for \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\" returns successfully" Jan 13 20:44:11.701190 containerd[1485]: time="2025-01-13T20:44:11.701142152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dc874945-vclf2,Uid:5379735e-d3c8-481c-ba97-384db8752ee4,Namespace:calico-system,Attempt:3,}" Jan 13 20:44:11.702834 systemd[1]: run-netns-cni\x2d788f9389\x2d8ecc\x2d2484\x2d5546\x2d511fce5e6e59.mount: Deactivated successfully. 
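Each cycle above has the same shape: the failed sandbox is stopped and its never-created network torn down, then RunPodSandbox is retried with the Attempt counter bumped (2 to 3 in this section), and on failure the pod worker logs "Error syncing pod, skipping" and requeues. A control-flow sketch with hypothetical stand-ins for the CRI calls; only the shape mirrors the log, not kubelet's actual code:

```go
package main

import "fmt"

// Hypothetical stand-ins for the CRI calls seen in the log.
func stopPodSandbox(id string) {
	fmt.Printf("StopPodSandbox %q returns successfully\n", id)
}

func runPodSandbox(pod string, attempt int) error {
	// Fails identically on every attempt until calico/node writes its file.
	return fmt.Errorf("attempt %d: plugin type=%q failed (add): stat /var/lib/calico/nodename: no such file or directory", attempt, "calico")
}

// syncPodSandbox sketches one sync iteration: stale sandboxes from the
// previous attempt are stopped and torn down, then the sandbox is
// recreated with Attempt+1. On error the worker just requeues the pod,
// which is why the same failure recurs every few hundred milliseconds.
func syncPodSandbox(pod string, stale []string, lastAttempt int) error {
	for _, id := range stale {
		stopPodSandbox(id)
	}
	return runPodSandbox(pod, lastAttempt+1)
}

func main() {
	err := syncPodSandbox("kube-system/coredns-76f75df574-qkvcn",
		[]string{"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f"}, 2)
	fmt.Println("Error syncing pod, skipping:", err)
}
```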
Jan 13 20:44:11.807969 containerd[1485]: time="2025-01-13T20:44:11.807079639Z" level=error msg="Failed to destroy network for sandbox \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.807969 containerd[1485]: time="2025-01-13T20:44:11.807848951Z" level=error msg="encountered an error cleaning up failed sandbox \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.807969 containerd[1485]: time="2025-01-13T20:44:11.807900639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-48828,Uid:10adf362-b95c-4368-9f5a-3041f3e43b8c,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.808356 kubelet[2663]: E0113 20:44:11.808203 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.808356 kubelet[2663]: E0113 20:44:11.808265 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-48828" Jan 13 20:44:11.808356 kubelet[2663]: E0113 20:44:11.808285 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-48828" Jan 13 20:44:11.808468 kubelet[2663]: E0113 20:44:11.808341 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c845b497c-48828_calico-apiserver(10adf362-b95c-4368-9f5a-3041f3e43b8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c845b497c-48828_calico-apiserver(10adf362-b95c-4368-9f5a-3041f3e43b8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c845b497c-48828" podUID="10adf362-b95c-4368-9f5a-3041f3e43b8c" Jan 13 20:44:11.824491 containerd[1485]: time="2025-01-13T20:44:11.824088977Z" level=error msg="Failed to destroy network for sandbox \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.825092 containerd[1485]: time="2025-01-13T20:44:11.825070944Z" level=error msg="encountered an error cleaning up failed sandbox \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.825531 containerd[1485]: time="2025-01-13T20:44:11.825511901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qkvcn,Uid:acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.825821 kubelet[2663]: E0113 20:44:11.825798 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.826332 kubelet[2663]: E0113 20:44:11.825943 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qkvcn" Jan 13 20:44:11.826332 kubelet[2663]: E0113 20:44:11.825966 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qkvcn" Jan 13 20:44:11.826332 kubelet[2663]: E0113 20:44:11.826067 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qkvcn_kube-system(acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qkvcn_kube-system(acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qkvcn" podUID="acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83" Jan 13 20:44:11.841999 containerd[1485]: time="2025-01-13T20:44:11.841939665Z" level=error msg="Failed to destroy network for sandbox \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.842567 containerd[1485]: time="2025-01-13T20:44:11.842534355Z" level=error msg="encountered an error cleaning up failed sandbox \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.842683 containerd[1485]: time="2025-01-13T20:44:11.842612503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9f87,Uid:24356970-d4ba-4d53-b9ce-72c96ee695b4,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.843550 kubelet[2663]: E0113 20:44:11.843066 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.843550 kubelet[2663]: E0113 20:44:11.843130 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z9f87" Jan 13 20:44:11.843550 kubelet[2663]: E0113 20:44:11.843160 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z9f87" Jan 13 20:44:11.843739 kubelet[2663]: E0113 20:44:11.843267 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-z9f87_kube-system(24356970-d4ba-4d53-b9ce-72c96ee695b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-z9f87_kube-system(24356970-d4ba-4d53-b9ce-72c96ee695b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-z9f87" podUID="24356970-d4ba-4d53-b9ce-72c96ee695b4" Jan 13 20:44:11.843961 containerd[1485]: time="2025-01-13T20:44:11.843910912Z" level=error msg="Failed to destroy network for sandbox \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.844429 containerd[1485]: time="2025-01-13T20:44:11.844383991Z" level=error msg="encountered an error cleaning up failed sandbox \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.844587 containerd[1485]: time="2025-01-13T20:44:11.844486184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-psj2v,Uid:0d99061e-8895-489c-be69-03c406284aa9,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.844819 kubelet[2663]: E0113 20:44:11.844711 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.844956 kubelet[2663]: E0113 20:44:11.844942 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" Jan 13 20:44:11.845074 kubelet[2663]: E0113 20:44:11.845061 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" Jan 13 20:44:11.845381 kubelet[2663]: E0113 20:44:11.845350 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c845b497c-psj2v_calico-apiserver(0d99061e-8895-489c-be69-03c406284aa9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c845b497c-psj2v_calico-apiserver(0d99061e-8895-489c-be69-03c406284aa9)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" podUID="0d99061e-8895-489c-be69-03c406284aa9" Jan 13 20:44:11.849650 containerd[1485]: time="2025-01-13T20:44:11.849618028Z" level=error msg="Failed to destroy network for sandbox \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.849978 containerd[1485]: time="2025-01-13T20:44:11.849937415Z" level=error msg="encountered an error cleaning up failed sandbox \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.850456 containerd[1485]: time="2025-01-13T20:44:11.849983674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-74fml,Uid:0740e80e-5301-4176-ac9e-7bf36ee863df,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.850500 kubelet[2663]: E0113 20:44:11.850172 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.850500 kubelet[2663]: E0113 20:44:11.850268 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-74fml" Jan 13 20:44:11.850500 kubelet[2663]: E0113 20:44:11.850290 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-74fml" Jan 13 20:44:11.850577 kubelet[2663]: E0113 20:44:11.850338 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-74fml_calico-system(0740e80e-5301-4176-ac9e-7bf36ee863df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-74fml_calico-system(0740e80e-5301-4176-ac9e-7bf36ee863df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-74fml" podUID="0740e80e-5301-4176-ac9e-7bf36ee863df" Jan 13 20:44:11.866042 containerd[1485]: time="2025-01-13T20:44:11.865611877Z" level=error msg="Failed to destroy network for sandbox \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.866042 containerd[1485]: time="2025-01-13T20:44:11.865993261Z" level=error msg="encountered an error cleaning up failed sandbox \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.866225 containerd[1485]: time="2025-01-13T20:44:11.866207218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dc874945-vclf2,Uid:5379735e-d3c8-481c-ba97-384db8752ee4,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.866508 kubelet[2663]: E0113 20:44:11.866477 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:11.866573 kubelet[2663]: E0113 20:44:11.866528 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" Jan 13 20:44:11.866573 kubelet[2663]: E0113 20:44:11.866549 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" Jan 13 20:44:11.866618 kubelet[2663]: E0113 20:44:11.866596 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-69dc874945-vclf2_calico-system(5379735e-d3c8-481c-ba97-384db8752ee4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69dc874945-vclf2_calico-system(5379735e-d3c8-481c-ba97-384db8752ee4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" podUID="5379735e-d3c8-481c-ba97-384db8752ee4" Jan 13 20:44:12.676561 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9-shm.mount: Deactivated successfully. Jan 13 20:44:12.701621 kubelet[2663]: I0113 20:44:12.701509 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0" Jan 13 20:44:12.703254 containerd[1485]: time="2025-01-13T20:44:12.702609494Z" level=info msg="StopPodSandbox for \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\"" Jan 13 20:44:12.703254 containerd[1485]: time="2025-01-13T20:44:12.702800878Z" level=info msg="Ensure that sandbox fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0 in task-service has been cleanup successfully" Jan 13 20:44:12.706790 containerd[1485]: time="2025-01-13T20:44:12.705157415Z" level=info msg="TearDown network for sandbox \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\" successfully" Jan 13 20:44:12.706790 containerd[1485]: time="2025-01-13T20:44:12.705187381Z" level=info msg="StopPodSandbox for \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\" returns successfully" Jan 13 20:44:12.706790 containerd[1485]: time="2025-01-13T20:44:12.706141725Z" level=info msg="StopPodSandbox for \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\"" Jan 13 20:44:12.705412 systemd[1]: run-netns-cni\x2da440ff5b\x2dea92\x2d067a\x2d37a9\x2d80dd82a61aa6.mount: Deactivated successfully. 
Jan 13 20:44:12.707258 kubelet[2663]: I0113 20:44:12.707231 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81" Jan 13 20:44:12.709943 kubelet[2663]: I0113 20:44:12.709683 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9" Jan 13 20:44:12.714943 kubelet[2663]: I0113 20:44:12.714298 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0" Jan 13 20:44:12.718121 kubelet[2663]: I0113 20:44:12.718107 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f" Jan 13 20:44:12.720515 containerd[1485]: time="2025-01-13T20:44:12.706236175Z" level=info msg="TearDown network for sandbox \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\" successfully" Jan 13 20:44:12.720581 containerd[1485]: time="2025-01-13T20:44:12.720511186Z" level=info msg="StopPodSandbox for \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\" returns successfully" Jan 13 20:44:12.720581 containerd[1485]: time="2025-01-13T20:44:12.707691189Z" level=info msg="StopPodSandbox for \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\"" Jan 13 20:44:12.721100 containerd[1485]: time="2025-01-13T20:44:12.720739770Z" level=info msg="Ensure that sandbox 1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81 in task-service has been cleanup successfully" Jan 13 20:44:12.721100 containerd[1485]: time="2025-01-13T20:44:12.711195226Z" level=info msg="StopPodSandbox for \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\"" Jan 13 20:44:12.721100 containerd[1485]: time="2025-01-13T20:44:12.715205947Z" level=info msg="StopPodSandbox for \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\"" Jan 13 20:44:12.721100 containerd[1485]: time="2025-01-13T20:44:12.721058446Z" level=info msg="Ensure that sandbox 028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0 in task-service has been cleanup successfully" Jan 13 20:44:12.721401 containerd[1485]: time="2025-01-13T20:44:12.721137706Z" level=info msg="Ensure that sandbox 96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9 in task-service has been cleanup successfully" Jan 13 20:44:12.721401 containerd[1485]: time="2025-01-13T20:44:12.721199494Z" level=info msg="TearDown network for sandbox \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\" successfully" Jan 13 20:44:12.721401 containerd[1485]: time="2025-01-13T20:44:12.721211497Z" level=info msg="StopPodSandbox for \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\" returns successfully" Jan 13 20:44:12.721621 containerd[1485]: time="2025-01-13T20:44:12.718526135Z" level=info msg="StopPodSandbox for \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\"" Jan 13 20:44:12.721692 containerd[1485]: time="2025-01-13T20:44:12.721632787Z" level=info msg="Ensure that sandbox c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f in task-service has been cleanup successfully" Jan 13 20:44:12.722049 containerd[1485]: time="2025-01-13T20:44:12.721780387Z" level=info msg="TearDown network for sandbox \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\" successfully" Jan 13 
20:44:12.722049 containerd[1485]: time="2025-01-13T20:44:12.721793071Z" level=info msg="StopPodSandbox for \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\" returns successfully" Jan 13 20:44:12.722049 containerd[1485]: time="2025-01-13T20:44:12.721875397Z" level=info msg="StopPodSandbox for \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\"" Jan 13 20:44:12.722049 containerd[1485]: time="2025-01-13T20:44:12.721881229Z" level=info msg="StopPodSandbox for \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\"" Jan 13 20:44:12.722049 containerd[1485]: time="2025-01-13T20:44:12.721939800Z" level=info msg="TearDown network for sandbox \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\" successfully" Jan 13 20:44:12.722049 containerd[1485]: time="2025-01-13T20:44:12.721948667Z" level=info msg="StopPodSandbox for \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\" returns successfully" Jan 13 20:44:12.722049 containerd[1485]: time="2025-01-13T20:44:12.721966211Z" level=info msg="TearDown network for sandbox \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\" successfully" Jan 13 20:44:12.722049 containerd[1485]: time="2025-01-13T20:44:12.721995055Z" level=info msg="StopPodSandbox for \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\" returns successfully" Jan 13 20:44:12.722248 containerd[1485]: time="2025-01-13T20:44:12.722096728Z" level=info msg="TearDown network for sandbox \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\" successfully" Jan 13 20:44:12.722248 containerd[1485]: time="2025-01-13T20:44:12.722106938Z" level=info msg="StopPodSandbox for \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\" returns successfully" Jan 13 20:44:12.724056 containerd[1485]: time="2025-01-13T20:44:12.722725393Z" level=info msg="StopPodSandbox for \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\"" Jan 13 20:44:12.724056 containerd[1485]: time="2025-01-13T20:44:12.722811396Z" level=info msg="TearDown network for sandbox \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\" successfully" Jan 13 20:44:12.724056 containerd[1485]: time="2025-01-13T20:44:12.722820353Z" level=info msg="StopPodSandbox for \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\" returns successfully" Jan 13 20:44:12.724056 containerd[1485]: time="2025-01-13T20:44:12.722832286Z" level=info msg="StopPodSandbox for \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\"" Jan 13 20:44:12.724056 containerd[1485]: time="2025-01-13T20:44:12.722902760Z" level=info msg="StopPodSandbox for \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\"" Jan 13 20:44:12.724056 containerd[1485]: time="2025-01-13T20:44:12.722917437Z" level=info msg="TearDown network for sandbox \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\" successfully" Jan 13 20:44:12.724056 containerd[1485]: time="2025-01-13T20:44:12.722930002Z" level=info msg="StopPodSandbox for \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\" returns successfully" Jan 13 20:44:12.724056 containerd[1485]: time="2025-01-13T20:44:12.722962524Z" level=info msg="TearDown network for sandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\" successfully" Jan 13 20:44:12.724056 containerd[1485]: time="2025-01-13T20:44:12.722971360Z" level=info msg="StopPodSandbox for 
\"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\" returns successfully" Jan 13 20:44:12.724056 containerd[1485]: time="2025-01-13T20:44:12.723000957Z" level=info msg="StopPodSandbox for \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\"" Jan 13 20:44:12.724056 containerd[1485]: time="2025-01-13T20:44:12.723099443Z" level=info msg="TearDown network for sandbox \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\" successfully" Jan 13 20:44:12.724056 containerd[1485]: time="2025-01-13T20:44:12.723111526Z" level=info msg="StopPodSandbox for \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\" returns successfully" Jan 13 20:44:12.724460 containerd[1485]: time="2025-01-13T20:44:12.724360409Z" level=info msg="TearDown network for sandbox \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\" successfully" Jan 13 20:44:12.724460 containerd[1485]: time="2025-01-13T20:44:12.724374797Z" level=info msg="StopPodSandbox for \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\" returns successfully" Jan 13 20:44:12.725349 containerd[1485]: time="2025-01-13T20:44:12.724976931Z" level=info msg="StopPodSandbox for \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\"" Jan 13 20:44:12.725349 containerd[1485]: time="2025-01-13T20:44:12.725024882Z" level=info msg="StopPodSandbox for \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\"" Jan 13 20:44:12.725349 containerd[1485]: time="2025-01-13T20:44:12.725068104Z" level=info msg="TearDown network for sandbox \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\" successfully" Jan 13 20:44:12.725349 containerd[1485]: time="2025-01-13T20:44:12.725079495Z" level=info msg="StopPodSandbox for \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\" returns successfully" Jan 13 20:44:12.725349 containerd[1485]: time="2025-01-13T20:44:12.725113470Z" level=info msg="TearDown network for sandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\" successfully" Jan 13 20:44:12.725349 containerd[1485]: time="2025-01-13T20:44:12.725124461Z" level=info msg="StopPodSandbox for \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\" returns successfully" Jan 13 20:44:12.725349 containerd[1485]: time="2025-01-13T20:44:12.725159587Z" level=info msg="StopPodSandbox for \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\"" Jan 13 20:44:12.725349 containerd[1485]: time="2025-01-13T20:44:12.725176780Z" level=info msg="StopPodSandbox for \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\"" Jan 13 20:44:12.725349 containerd[1485]: time="2025-01-13T20:44:12.725225533Z" level=info msg="TearDown network for sandbox \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\" successfully" Jan 13 20:44:12.725349 containerd[1485]: time="2025-01-13T20:44:12.725237786Z" level=info msg="TearDown network for sandbox \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\" successfully" Jan 13 20:44:12.725349 containerd[1485]: time="2025-01-13T20:44:12.725247465Z" level=info msg="StopPodSandbox for \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\" returns successfully" Jan 13 20:44:12.725349 containerd[1485]: time="2025-01-13T20:44:12.725238126Z" level=info msg="StopPodSandbox for \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\" returns successfully" Jan 13 20:44:12.725349 containerd[1485]: 
time="2025-01-13T20:44:12.725326275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dc874945-vclf2,Uid:5379735e-d3c8-481c-ba97-384db8752ee4,Namespace:calico-system,Attempt:4,}" Jan 13 20:44:12.725863 systemd[1]: run-netns-cni\x2d6c4a2607\x2df980\x2d730b\x2dafd5\x2d1c0614948df8.mount: Deactivated successfully. Jan 13 20:44:12.729159 kubelet[2663]: E0113 20:44:12.728315 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:12.729159 kubelet[2663]: I0113 20:44:12.728501 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3" Jan 13 20:44:12.729228 containerd[1485]: time="2025-01-13T20:44:12.726627336Z" level=info msg="StopPodSandbox for \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\"" Jan 13 20:44:12.729228 containerd[1485]: time="2025-01-13T20:44:12.726696297Z" level=info msg="TearDown network for sandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\" successfully" Jan 13 20:44:12.729228 containerd[1485]: time="2025-01-13T20:44:12.726705966Z" level=info msg="StopPodSandbox for \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\" returns successfully" Jan 13 20:44:12.729228 containerd[1485]: time="2025-01-13T20:44:12.728275788Z" level=info msg="StopPodSandbox for \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\"" Jan 13 20:44:12.729228 containerd[1485]: time="2025-01-13T20:44:12.728347344Z" level=info msg="TearDown network for sandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\" successfully" Jan 13 20:44:12.729228 containerd[1485]: time="2025-01-13T20:44:12.728355980Z" level=info msg="StopPodSandbox for \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\" returns successfully" Jan 13 20:44:12.729228 containerd[1485]: time="2025-01-13T20:44:12.729108480Z" level=info msg="StopPodSandbox for \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\"" Jan 13 20:44:12.729228 containerd[1485]: time="2025-01-13T20:44:12.729177712Z" level=info msg="TearDown network for sandbox \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\" successfully" Jan 13 20:44:12.729228 containerd[1485]: time="2025-01-13T20:44:12.729188152Z" level=info msg="StopPodSandbox for \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\" returns successfully" Jan 13 20:44:12.725969 systemd[1]: run-netns-cni\x2d715b5578\x2d7a69\x2de706\x2d0b5c\x2db63db06b9c3f.mount: Deactivated successfully. Jan 13 20:44:12.729502 containerd[1485]: time="2025-01-13T20:44:12.729260750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-48828,Uid:10adf362-b95c-4368-9f5a-3041f3e43b8c,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:44:12.726092 systemd[1]: run-netns-cni\x2d3cc5d1fd\x2d9dff\x2dd313\x2d9077\x2d028470351a0b.mount: Deactivated successfully. Jan 13 20:44:12.726167 systemd[1]: run-netns-cni\x2d5d1d1638\x2dc24e\x2d01cd\x2db684\x2d7fb2b406b5ce.mount: Deactivated successfully. 
Jan 13 20:44:12.730929 containerd[1485]: time="2025-01-13T20:44:12.729741633Z" level=info msg="StopPodSandbox for \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\"" Jan 13 20:44:12.730929 containerd[1485]: time="2025-01-13T20:44:12.730740631Z" level=info msg="Ensure that sandbox 8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3 in task-service has been cleanup successfully" Jan 13 20:44:12.732191 containerd[1485]: time="2025-01-13T20:44:12.731296718Z" level=info msg="StopPodSandbox for \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\"" Jan 13 20:44:12.732191 containerd[1485]: time="2025-01-13T20:44:12.731391057Z" level=info msg="TearDown network for sandbox \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\" successfully" Jan 13 20:44:12.732191 containerd[1485]: time="2025-01-13T20:44:12.731401818Z" level=info msg="StopPodSandbox for \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\" returns successfully" Jan 13 20:44:12.732191 containerd[1485]: time="2025-01-13T20:44:12.731490196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qkvcn,Uid:acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83,Namespace:kube-system,Attempt:4,}" Jan 13 20:44:12.732191 containerd[1485]: time="2025-01-13T20:44:12.731647795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-psj2v,Uid:0d99061e-8895-489c-be69-03c406284aa9,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:44:12.732319 kubelet[2663]: E0113 20:44:12.732308 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:12.732469 containerd[1485]: time="2025-01-13T20:44:12.732449027Z" level=info msg="TearDown network for sandbox \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\" successfully" Jan 13 20:44:12.733548 systemd[1]: run-netns-cni\x2d4852bcb5\x2d9b85\x2ddbc4\x2dee8f\x2dfce0010b0e81.mount: Deactivated successfully. 
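What reads as noise here is a regular pattern: each RunPodSandbox attempt that fails is followed by StopPodSandbox/TearDown entries for the dead sandbox ID, and the next RunPodSandbox carries an incremented Attempt counter (Attempt:4 above, Attempt:5 further down). Schematically the loop looks like the sketch below, where runPodSandbox and stopPodSandbox are placeholder stand-ins for the CRI calls, not kubelet functions.

// retry_sketch.go — schematic of the retry pattern visible in the log; the
// function names and backoff are placeholders, not kubelet's implementation.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoNodename = errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)

// Fails until calico/node has written /var/lib/calico/nodename.
func runPodSandbox(attempt int) error { return errNoNodename }

// Stands in for the StopPodSandbox/TearDown/netns-unmount entries above.
func stopPodSandbox(attempt int) {}

func main() {
	for attempt := 0; attempt <= 5; attempt++ {
		err := runPodSandbox(attempt)
		if err == nil {
			fmt.Println("sandbox ready at attempt", attempt)
			return
		}
		fmt.Printf("Attempt:%d failed: %v\n", attempt, err)
		stopPodSandbox(attempt)
		time.Sleep(200 * time.Millisecond) // placeholder backoff
	}
}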
Jan 13 20:44:12.734705 containerd[1485]: time="2025-01-13T20:44:12.734632556Z" level=info msg="StopPodSandbox for \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\" returns successfully" Jan 13 20:44:12.735026 containerd[1485]: time="2025-01-13T20:44:12.734980757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9f87,Uid:24356970-d4ba-4d53-b9ce-72c96ee695b4,Namespace:kube-system,Attempt:4,}" Jan 13 20:44:12.736536 containerd[1485]: time="2025-01-13T20:44:12.736513710Z" level=info msg="StopPodSandbox for \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\"" Jan 13 20:44:12.736729 containerd[1485]: time="2025-01-13T20:44:12.736667151Z" level=info msg="TearDown network for sandbox \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\" successfully" Jan 13 20:44:12.736729 containerd[1485]: time="2025-01-13T20:44:12.736680927Z" level=info msg="StopPodSandbox for \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\" returns successfully" Jan 13 20:44:12.737509 containerd[1485]: time="2025-01-13T20:44:12.737490496Z" level=info msg="StopPodSandbox for \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\"" Jan 13 20:44:12.737691 containerd[1485]: time="2025-01-13T20:44:12.737676859Z" level=info msg="TearDown network for sandbox \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\" successfully" Jan 13 20:44:12.737838 containerd[1485]: time="2025-01-13T20:44:12.737823879Z" level=info msg="StopPodSandbox for \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\" returns successfully" Jan 13 20:44:12.738358 containerd[1485]: time="2025-01-13T20:44:12.738323788Z" level=info msg="StopPodSandbox for \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\"" Jan 13 20:44:12.738447 containerd[1485]: time="2025-01-13T20:44:12.738417888Z" level=info msg="TearDown network for sandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\" successfully" Jan 13 20:44:12.738500 containerd[1485]: time="2025-01-13T20:44:12.738444768Z" level=info msg="StopPodSandbox for \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\" returns successfully" Jan 13 20:44:12.739046 containerd[1485]: time="2025-01-13T20:44:12.738987620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-74fml,Uid:0740e80e-5301-4176-ac9e-7bf36ee863df,Namespace:calico-system,Attempt:4,}" Jan 13 20:44:13.674322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount448550118.mount: Deactivated successfully. 
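Alongside those retries, systemd reports each piece of per-sandbox state being released: the run-containerd-...-shm.mount units are the sandboxes' /dev/shm bind mounts, and the run-netns-cni-... units are their CNI network namespaces. The detach step looks roughly like this sketch; the paths follow the patterns in the log with placeholder IDs, and this is not containerd's code.

// cleanup_sketch.go — illustrative only: what the "Deactivated successfully"
// mount entries correspond to. Paths carry placeholder IDs.
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// detach lazily unmounts one leftover sandbox mount, as the runtime does when
// a failed sandbox is cleaned up.
func detach(path string) {
	err := unix.Unmount(path, unix.MNT_DETACH)
	switch {
	case err == nil:
		fmt.Println("unmounted:", path)
	case os.IsNotExist(err):
		fmt.Println("already gone:", path)
	default:
		fmt.Fprintf(os.Stderr, "unmount %s: %v\n", path, err)
	}
}

func main() {
	// Placeholder IDs; real paths carry the sandbox ID and the CNI netns UUID.
	detach("/run/containerd/io.containerd.grpc.v1.cri/sandboxes/<sandbox-id>/shm")
	detach("/run/netns/cni-<uuid>")
}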
Jan 13 20:44:13.964995 containerd[1485]: time="2025-01-13T20:44:13.964515619Z" level=error msg="Failed to destroy network for sandbox \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:13.964995 containerd[1485]: time="2025-01-13T20:44:13.964954393Z" level=error msg="encountered an error cleaning up failed sandbox \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:13.965566 containerd[1485]: time="2025-01-13T20:44:13.965008135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-48828,Uid:10adf362-b95c-4368-9f5a-3041f3e43b8c,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:13.966307 kubelet[2663]: E0113 20:44:13.966272 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:13.966676 kubelet[2663]: E0113 20:44:13.966336 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-48828" Jan 13 20:44:13.966676 kubelet[2663]: E0113 20:44:13.966359 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-48828" Jan 13 20:44:13.966676 kubelet[2663]: E0113 20:44:13.966410 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c845b497c-48828_calico-apiserver(10adf362-b95c-4368-9f5a-3041f3e43b8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c845b497c-48828_calico-apiserver(10adf362-b95c-4368-9f5a-3041f3e43b8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c845b497c-48828" podUID="10adf362-b95c-4368-9f5a-3041f3e43b8c" Jan 13 20:44:13.977307 containerd[1485]: time="2025-01-13T20:44:13.977256779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:13.979901 containerd[1485]: time="2025-01-13T20:44:13.979493688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 20:44:13.988113 containerd[1485]: time="2025-01-13T20:44:13.987094604Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:14.000754 containerd[1485]: time="2025-01-13T20:44:14.000713310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:14.002374 containerd[1485]: time="2025-01-13T20:44:14.002297970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.391989393s" Jan 13 20:44:14.002374 containerd[1485]: time="2025-01-13T20:44:14.002343085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 20:44:14.016280 containerd[1485]: time="2025-01-13T20:44:14.016144930Z" level=info msg="CreateContainer within sandbox \"5abcf3085cce234854dffafa3eb61a58dea765fdc019ee350be419a1793e8261\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:44:14.058544 containerd[1485]: time="2025-01-13T20:44:14.058489039Z" level=info msg="CreateContainer within sandbox \"5abcf3085cce234854dffafa3eb61a58dea765fdc019ee350be419a1793e8261\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a8117323abd4a352ff96faed9e62ad987dc35e5269b27c506ce2b6b48834afcf\"" Jan 13 20:44:14.060812 containerd[1485]: time="2025-01-13T20:44:14.060187705Z" level=info msg="StartContainer for \"a8117323abd4a352ff96faed9e62ad987dc35e5269b27c506ce2b6b48834afcf\"" Jan 13 20:44:14.077555 containerd[1485]: time="2025-01-13T20:44:14.077490398Z" level=error msg="Failed to destroy network for sandbox \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.077993 containerd[1485]: time="2025-01-13T20:44:14.077965710Z" level=error msg="encountered an error cleaning up failed sandbox \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.078091 containerd[1485]: time="2025-01-13T20:44:14.078064588Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-69dc874945-vclf2,Uid:5379735e-d3c8-481c-ba97-384db8752ee4,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.079497 kubelet[2663]: E0113 20:44:14.078681 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.079497 kubelet[2663]: E0113 20:44:14.078744 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" Jan 13 20:44:14.079497 kubelet[2663]: E0113 20:44:14.078768 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" Jan 13 20:44:14.079802 kubelet[2663]: E0113 20:44:14.079461 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69dc874945-vclf2_calico-system(5379735e-d3c8-481c-ba97-384db8752ee4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69dc874945-vclf2_calico-system(5379735e-d3c8-481c-ba97-384db8752ee4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" podUID="5379735e-d3c8-481c-ba97-384db8752ee4" Jan 13 20:44:14.080172 containerd[1485]: time="2025-01-13T20:44:14.080032655Z" level=error msg="Failed to destroy network for sandbox \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.080530 containerd[1485]: time="2025-01-13T20:44:14.080500563Z" level=error msg="encountered an error cleaning up failed sandbox \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.080648 containerd[1485]: time="2025-01-13T20:44:14.080622054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-74fml,Uid:0740e80e-5301-4176-ac9e-7bf36ee863df,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.080913 kubelet[2663]: E0113 20:44:14.080885 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.081145 kubelet[2663]: E0113 20:44:14.081127 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-74fml" Jan 13 20:44:14.081309 kubelet[2663]: E0113 20:44:14.081218 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-74fml" Jan 13 20:44:14.081309 kubelet[2663]: E0113 20:44:14.081283 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-74fml_calico-system(0740e80e-5301-4176-ac9e-7bf36ee863df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-74fml_calico-system(0740e80e-5301-4176-ac9e-7bf36ee863df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-74fml" podUID="0740e80e-5301-4176-ac9e-7bf36ee863df" Jan 13 20:44:14.081686 containerd[1485]: time="2025-01-13T20:44:14.081641770Z" level=error msg="Failed to destroy network for sandbox \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.082188 containerd[1485]: time="2025-01-13T20:44:14.082160145Z" level=error msg="encountered an error cleaning up failed sandbox \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.082330 containerd[1485]: time="2025-01-13T20:44:14.082305290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-psj2v,Uid:0d99061e-8895-489c-be69-03c406284aa9,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.082606 kubelet[2663]: E0113 20:44:14.082587 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.082806 kubelet[2663]: E0113 20:44:14.082699 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" Jan 13 20:44:14.082806 kubelet[2663]: E0113 20:44:14.082728 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" Jan 13 20:44:14.082806 kubelet[2663]: E0113 20:44:14.082781 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c845b497c-psj2v_calico-apiserver(0d99061e-8895-489c-be69-03c406284aa9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c845b497c-psj2v_calico-apiserver(0d99061e-8895-489c-be69-03c406284aa9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" podUID="0d99061e-8895-489c-be69-03c406284aa9" Jan 13 20:44:14.096662 containerd[1485]: time="2025-01-13T20:44:14.096592277Z" level=error msg="Failed to destroy network for sandbox \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.097146 containerd[1485]: time="2025-01-13T20:44:14.097097196Z" level=error msg="encountered an error cleaning up failed sandbox 
\"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.097207 containerd[1485]: time="2025-01-13T20:44:14.097153102Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qkvcn,Uid:acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.097495 kubelet[2663]: E0113 20:44:14.097461 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.097597 kubelet[2663]: E0113 20:44:14.097539 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qkvcn" Jan 13 20:44:14.097597 kubelet[2663]: E0113 20:44:14.097579 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qkvcn" Jan 13 20:44:14.097672 kubelet[2663]: E0113 20:44:14.097636 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qkvcn_kube-system(acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qkvcn_kube-system(acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qkvcn" podUID="acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83" Jan 13 20:44:14.106280 containerd[1485]: time="2025-01-13T20:44:14.106214186Z" level=error msg="Failed to destroy network for sandbox \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.106943 containerd[1485]: time="2025-01-13T20:44:14.106902983Z" 
level=error msg="encountered an error cleaning up failed sandbox \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.107137 containerd[1485]: time="2025-01-13T20:44:14.106976453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9f87,Uid:24356970-d4ba-4d53-b9ce-72c96ee695b4,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.107258 kubelet[2663]: E0113 20:44:14.107232 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:44:14.107461 kubelet[2663]: E0113 20:44:14.107281 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z9f87" Jan 13 20:44:14.107461 kubelet[2663]: E0113 20:44:14.107340 2663 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z9f87" Jan 13 20:44:14.107461 kubelet[2663]: E0113 20:44:14.107388 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-z9f87_kube-system(24356970-d4ba-4d53-b9ce-72c96ee695b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-z9f87_kube-system(24356970-d4ba-4d53-b9ce-72c96ee695b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-z9f87" podUID="24356970-d4ba-4d53-b9ce-72c96ee695b4" Jan 13 20:44:14.151170 systemd[1]: Started cri-containerd-a8117323abd4a352ff96faed9e62ad987dc35e5269b27c506ce2b6b48834afcf.scope - libcontainer container a8117323abd4a352ff96faed9e62ad987dc35e5269b27c506ce2b6b48834afcf. 
Jan 13 20:44:14.193625 containerd[1485]: time="2025-01-13T20:44:14.193578896Z" level=info msg="StartContainer for \"a8117323abd4a352ff96faed9e62ad987dc35e5269b27c506ce2b6b48834afcf\" returns successfully" Jan 13 20:44:14.263431 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:44:14.264343 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 13 20:44:14.676334 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872-shm.mount: Deactivated successfully. Jan 13 20:44:14.676454 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9-shm.mount: Deactivated successfully. Jan 13 20:44:14.676539 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33-shm.mount: Deactivated successfully. Jan 13 20:44:14.676615 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25-shm.mount: Deactivated successfully. Jan 13 20:44:14.735100 kubelet[2663]: I0113 20:44:14.735066 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33" Jan 13 20:44:14.735641 containerd[1485]: time="2025-01-13T20:44:14.735591862Z" level=info msg="StopPodSandbox for \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\"" Jan 13 20:44:14.736043 containerd[1485]: time="2025-01-13T20:44:14.735841676Z" level=info msg="Ensure that sandbox 4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33 in task-service has been cleanup successfully" Jan 13 20:44:14.736143 containerd[1485]: time="2025-01-13T20:44:14.736121909Z" level=info msg="TearDown network for sandbox \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\" successfully" Jan 13 20:44:14.736143 containerd[1485]: time="2025-01-13T20:44:14.736140333Z" level=info msg="StopPodSandbox for \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\" returns successfully" Jan 13 20:44:14.738177 containerd[1485]: time="2025-01-13T20:44:14.737592150Z" level=info msg="StopPodSandbox for \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\"" Jan 13 20:44:14.738177 containerd[1485]: time="2025-01-13T20:44:14.737747816Z" level=info msg="TearDown network for sandbox \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\" successfully" Jan 13 20:44:14.738177 containerd[1485]: time="2025-01-13T20:44:14.737761582Z" level=info msg="StopPodSandbox for \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\" returns successfully" Jan 13 20:44:14.738177 containerd[1485]: time="2025-01-13T20:44:14.738001678Z" level=info msg="StopPodSandbox for \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\"" Jan 13 20:44:14.738177 containerd[1485]: time="2025-01-13T20:44:14.738102549Z" level=info msg="TearDown network for sandbox \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\" successfully" Jan 13 20:44:14.738177 containerd[1485]: time="2025-01-13T20:44:14.738116726Z" level=info msg="StopPodSandbox for \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\" returns successfully" Jan 13 20:44:14.738580 containerd[1485]: time="2025-01-13T20:44:14.738482360Z" level=info msg="StopPodSandbox for
\"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\"" Jan 13 20:44:14.738628 containerd[1485]: time="2025-01-13T20:44:14.738577441Z" level=info msg="TearDown network for sandbox \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\" successfully" Jan 13 20:44:14.738628 containerd[1485]: time="2025-01-13T20:44:14.738590797Z" level=info msg="StopPodSandbox for \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\" returns successfully" Jan 13 20:44:14.738701 systemd[1]: run-netns-cni\x2d6fc02b18\x2df030\x2d33f4\x2df289\x2db643b71971c8.mount: Deactivated successfully. Jan 13 20:44:14.739307 containerd[1485]: time="2025-01-13T20:44:14.739038125Z" level=info msg="StopPodSandbox for \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\"" Jan 13 20:44:14.739307 containerd[1485]: time="2025-01-13T20:44:14.739162662Z" level=info msg="TearDown network for sandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\" successfully" Jan 13 20:44:14.739307 containerd[1485]: time="2025-01-13T20:44:14.739175015Z" level=info msg="StopPodSandbox for \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\" returns successfully" Jan 13 20:44:14.740460 kubelet[2663]: I0113 20:44:14.740405 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9" Jan 13 20:44:14.741002 containerd[1485]: time="2025-01-13T20:44:14.740965746Z" level=info msg="StopPodSandbox for \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\"" Jan 13 20:44:14.741347 containerd[1485]: time="2025-01-13T20:44:14.741223846Z" level=info msg="Ensure that sandbox 1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9 in task-service has been cleanup successfully" Jan 13 20:44:14.741536 containerd[1485]: time="2025-01-13T20:44:14.741425288Z" level=info msg="TearDown network for sandbox \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\" successfully" Jan 13 20:44:14.741536 containerd[1485]: time="2025-01-13T20:44:14.741475523Z" level=info msg="StopPodSandbox for \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\" returns successfully" Jan 13 20:44:14.741843 containerd[1485]: time="2025-01-13T20:44:14.741823233Z" level=info msg="StopPodSandbox for \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\"" Jan 13 20:44:14.741916 containerd[1485]: time="2025-01-13T20:44:14.741900099Z" level=info msg="TearDown network for sandbox \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\" successfully" Jan 13 20:44:14.741944 containerd[1485]: time="2025-01-13T20:44:14.741912965Z" level=info msg="StopPodSandbox for \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\" returns successfully" Jan 13 20:44:14.742568 containerd[1485]: time="2025-01-13T20:44:14.742546918Z" level=info msg="StopPodSandbox for \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\"" Jan 13 20:44:14.742644 containerd[1485]: time="2025-01-13T20:44:14.742626208Z" level=info msg="TearDown network for sandbox \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\" successfully" Jan 13 20:44:14.742644 containerd[1485]: time="2025-01-13T20:44:14.742639113Z" level=info msg="StopPodSandbox for \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\" returns successfully" Jan 13 20:44:14.743305 kubelet[2663]: I0113 20:44:14.742879 2663 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25" Jan 13 20:44:14.744115 containerd[1485]: time="2025-01-13T20:44:14.742890761Z" level=info msg="StopPodSandbox for \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\"" Jan 13 20:44:14.744115 containerd[1485]: time="2025-01-13T20:44:14.742961725Z" level=info msg="TearDown network for sandbox \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\" successfully" Jan 13 20:44:14.744115 containerd[1485]: time="2025-01-13T20:44:14.742970022Z" level=info msg="StopPodSandbox for \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\" returns successfully" Jan 13 20:44:14.744115 containerd[1485]: time="2025-01-13T20:44:14.743353409Z" level=info msg="StopPodSandbox for \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\"" Jan 13 20:44:14.744115 containerd[1485]: time="2025-01-13T20:44:14.743841896Z" level=info msg="Ensure that sandbox 7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25 in task-service has been cleanup successfully" Jan 13 20:44:14.744248 containerd[1485]: time="2025-01-13T20:44:14.743403805Z" level=info msg="StopPodSandbox for \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\"" Jan 13 20:44:14.744298 containerd[1485]: time="2025-01-13T20:44:14.743566905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dc874945-vclf2,Uid:5379735e-d3c8-481c-ba97-384db8752ee4,Namespace:calico-system,Attempt:5,}" Jan 13 20:44:14.744411 containerd[1485]: time="2025-01-13T20:44:14.744381511Z" level=info msg="TearDown network for sandbox \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\" successfully" Jan 13 20:44:14.744461 containerd[1485]: time="2025-01-13T20:44:14.744425755Z" level=info msg="TearDown network for sandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\" successfully" Jan 13 20:44:14.744461 containerd[1485]: time="2025-01-13T20:44:14.744443309Z" level=info msg="StopPodSandbox for \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\" returns successfully" Jan 13 20:44:14.744541 systemd[1]: run-netns-cni\x2d015abd18\x2d08cd\x2dc360\x2d5709\x2d5631f89dadf4.mount: Deactivated successfully. 
Jan 13 20:44:14.744930 containerd[1485]: time="2025-01-13T20:44:14.744576782Z" level=info msg="StopPodSandbox for \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\" returns successfully" Jan 13 20:44:14.745549 kubelet[2663]: E0113 20:44:14.745517 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:14.747034 containerd[1485]: time="2025-01-13T20:44:14.745812618Z" level=info msg="StopPodSandbox for \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\"" Jan 13 20:44:14.747034 containerd[1485]: time="2025-01-13T20:44:14.745894344Z" level=info msg="TearDown network for sandbox \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\" successfully" Jan 13 20:44:14.747034 containerd[1485]: time="2025-01-13T20:44:14.745903711Z" level=info msg="StopPodSandbox for \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\" returns successfully" Jan 13 20:44:14.747034 containerd[1485]: time="2025-01-13T20:44:14.745824501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qkvcn,Uid:acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83,Namespace:kube-system,Attempt:5,}" Jan 13 20:44:14.747034 containerd[1485]: time="2025-01-13T20:44:14.746238126Z" level=info msg="StopPodSandbox for \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\"" Jan 13 20:44:14.747034 containerd[1485]: time="2025-01-13T20:44:14.746385817Z" level=info msg="TearDown network for sandbox \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\" successfully" Jan 13 20:44:14.747034 containerd[1485]: time="2025-01-13T20:44:14.746397258Z" level=info msg="StopPodSandbox for \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\" returns successfully" Jan 13 20:44:14.747034 containerd[1485]: time="2025-01-13T20:44:14.746584645Z" level=info msg="StopPodSandbox for \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\"" Jan 13 20:44:14.747034 containerd[1485]: time="2025-01-13T20:44:14.746673894Z" level=info msg="TearDown network for sandbox \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\" successfully" Jan 13 20:44:14.747034 containerd[1485]: time="2025-01-13T20:44:14.746685887Z" level=info msg="StopPodSandbox for \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\" returns successfully" Jan 13 20:44:14.747742 containerd[1485]: time="2025-01-13T20:44:14.747598059Z" level=info msg="StopPodSandbox for \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\"" Jan 13 20:44:14.747742 containerd[1485]: time="2025-01-13T20:44:14.747682529Z" level=info msg="TearDown network for sandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\" successfully" Jan 13 20:44:14.747742 containerd[1485]: time="2025-01-13T20:44:14.747693360Z" level=info msg="StopPodSandbox for \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\" returns successfully" Jan 13 20:44:14.748663 containerd[1485]: time="2025-01-13T20:44:14.748235088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-48828,Uid:10adf362-b95c-4368-9f5a-3041f3e43b8c,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:44:14.749510 kubelet[2663]: I0113 20:44:14.749143 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872" Jan 13 20:44:14.749551 
systemd[1]: run-netns-cni\x2d6f9cf54c\x2d76f0\x2dccf0\x2def7e\x2de6976b660616.mount: Deactivated successfully.
Jan 13 20:44:14.749793 containerd[1485]: time="2025-01-13T20:44:14.749765444Z" level=info msg="StopPodSandbox for \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\""
Jan 13 20:44:14.749962 containerd[1485]: time="2025-01-13T20:44:14.749935406Z" level=info msg="Ensure that sandbox a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872 in task-service has been cleanup successfully"
Jan 13 20:44:14.750183 containerd[1485]: time="2025-01-13T20:44:14.750154412Z" level=info msg="TearDown network for sandbox \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\" successfully"
Jan 13 20:44:14.750183 containerd[1485]: time="2025-01-13T20:44:14.750173749Z" level=info msg="StopPodSandbox for \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\" returns successfully"
Jan 13 20:44:14.751270 containerd[1485]: time="2025-01-13T20:44:14.751173508Z" level=info msg="StopPodSandbox for \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\""
Jan 13 20:44:14.751270 containerd[1485]: time="2025-01-13T20:44:14.751250673Z" level=info msg="TearDown network for sandbox \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\" successfully"
Jan 13 20:44:14.751270 containerd[1485]: time="2025-01-13T20:44:14.751259731Z" level=info msg="StopPodSandbox for \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\" returns successfully"
Jan 13 20:44:14.752100 containerd[1485]: time="2025-01-13T20:44:14.751806209Z" level=info msg="StopPodSandbox for \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\""
Jan 13 20:44:14.752100 containerd[1485]: time="2025-01-13T20:44:14.751879397Z" level=info msg="TearDown network for sandbox \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\" successfully"
Jan 13 20:44:14.752100 containerd[1485]: time="2025-01-13T20:44:14.751889136Z" level=info msg="StopPodSandbox for \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\" returns successfully"
Jan 13 20:44:14.752307 containerd[1485]: time="2025-01-13T20:44:14.752211398Z" level=info msg="StopPodSandbox for \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\""
Jan 13 20:44:14.752307 containerd[1485]: time="2025-01-13T20:44:14.752283174Z" level=info msg="TearDown network for sandbox \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\" successfully"
Jan 13 20:44:14.752307 containerd[1485]: time="2025-01-13T20:44:14.752291991Z" level=info msg="StopPodSandbox for \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\" returns successfully"
Jan 13 20:44:14.752789 containerd[1485]: time="2025-01-13T20:44:14.752659409Z" level=info msg="StopPodSandbox for \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\""
Jan 13 20:44:14.752923 containerd[1485]: time="2025-01-13T20:44:14.752870119Z" level=info msg="TearDown network for sandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\" successfully"
Jan 13 20:44:14.752923 containerd[1485]: time="2025-01-13T20:44:14.752883273Z" level=info msg="StopPodSandbox for \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\" returns successfully"
Jan 13 20:44:14.753533 containerd[1485]: time="2025-01-13T20:44:14.753364888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-psj2v,Uid:0d99061e-8895-489c-be69-03c406284aa9,Namespace:calico-apiserver,Attempt:5,}"
Jan 13 20:44:14.754193 kubelet[2663]: I0113 20:44:14.753843 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3"
Jan 13 20:44:14.754101 systemd[1]: run-netns-cni\x2daddb0be7\x2d1d46\x2d53cd\x2d76d4\x2d571f796a2ed0.mount: Deactivated successfully.
Jan 13 20:44:14.754350 containerd[1485]: time="2025-01-13T20:44:14.754292139Z" level=info msg="StopPodSandbox for \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\""
Jan 13 20:44:14.754797 containerd[1485]: time="2025-01-13T20:44:14.754525692Z" level=info msg="Ensure that sandbox 7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3 in task-service has been cleanup successfully"
Jan 13 20:44:14.754797 containerd[1485]: time="2025-01-13T20:44:14.754742363Z" level=info msg="TearDown network for sandbox \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\" successfully"
Jan 13 20:44:14.754797 containerd[1485]: time="2025-01-13T20:44:14.754753124Z" level=info msg="StopPodSandbox for \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\" returns successfully"
Jan 13 20:44:14.755369 containerd[1485]: time="2025-01-13T20:44:14.755279854Z" level=info msg="StopPodSandbox for \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\""
Jan 13 20:44:14.755631 containerd[1485]: time="2025-01-13T20:44:14.755611614Z" level=info msg="TearDown network for sandbox \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\" successfully"
Jan 13 20:44:14.755631 containerd[1485]: time="2025-01-13T20:44:14.755629217Z" level=info msg="StopPodSandbox for \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\" returns successfully"
Jan 13 20:44:14.756126 containerd[1485]: time="2025-01-13T20:44:14.756008828Z" level=info msg="StopPodSandbox for \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\""
Jan 13 20:44:14.756126 containerd[1485]: time="2025-01-13T20:44:14.756094741Z" level=info msg="TearDown network for sandbox \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\" successfully"
Jan 13 20:44:14.756126 containerd[1485]: time="2025-01-13T20:44:14.756105482Z" level=info msg="StopPodSandbox for \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\" returns successfully"
Jan 13 20:44:14.756633 containerd[1485]: time="2025-01-13T20:44:14.756563402Z" level=info msg="StopPodSandbox for \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\""
Jan 13 20:44:14.756749 containerd[1485]: time="2025-01-13T20:44:14.756707865Z" level=info msg="TearDown network for sandbox \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\" successfully"
Jan 13 20:44:14.756749 containerd[1485]: time="2025-01-13T20:44:14.756729848Z" level=info msg="StopPodSandbox for \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\" returns successfully"
Jan 13 20:44:14.757264 containerd[1485]: time="2025-01-13T20:44:14.757078309Z" level=info msg="StopPodSandbox for \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\""
Jan 13 20:44:14.757264 containerd[1485]: time="2025-01-13T20:44:14.757176114Z" level=info msg="TearDown network for sandbox \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\" successfully"
Jan 13 20:44:14.757264 containerd[1485]: time="2025-01-13T20:44:14.757185923Z" level=info msg="StopPodSandbox for \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\" returns successfully"
Jan 13 20:44:14.758540 containerd[1485]: time="2025-01-13T20:44:14.757754062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9f87,Uid:24356970-d4ba-4d53-b9ce-72c96ee695b4,Namespace:kube-system,Attempt:5,}"
Jan 13 20:44:14.758588 kubelet[2663]: E0113 20:44:14.757389 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:44:14.758933 kubelet[2663]: E0113 20:44:14.758904 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:44:14.762733 kubelet[2663]: I0113 20:44:14.762608 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a"
Jan 13 20:44:14.763059 containerd[1485]: time="2025-01-13T20:44:14.763035529Z" level=info msg="StopPodSandbox for \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\""
Jan 13 20:44:14.763256 containerd[1485]: time="2025-01-13T20:44:14.763211384Z" level=info msg="Ensure that sandbox e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a in task-service has been cleanup successfully"
Jan 13 20:44:14.763519 containerd[1485]: time="2025-01-13T20:44:14.763421263Z" level=info msg="TearDown network for sandbox \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\" successfully"
Jan 13 20:44:14.763519 containerd[1485]: time="2025-01-13T20:44:14.763445949Z" level=info msg="StopPodSandbox for \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\" returns successfully"
Jan 13 20:44:14.764082 containerd[1485]: time="2025-01-13T20:44:14.764026782Z" level=info msg="StopPodSandbox for \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\""
Jan 13 20:44:14.764251 containerd[1485]: time="2025-01-13T20:44:14.764225089Z" level=info msg="TearDown network for sandbox \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\" successfully"
Jan 13 20:44:14.764286 containerd[1485]: time="2025-01-13T20:44:14.764251989Z" level=info msg="StopPodSandbox for \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\" returns successfully"
Jan 13 20:44:14.765614 containerd[1485]: time="2025-01-13T20:44:14.765529445Z" level=info msg="StopPodSandbox for \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\""
Jan 13 20:44:14.765716 containerd[1485]: time="2025-01-13T20:44:14.765648661Z" level=info msg="TearDown network for sandbox \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\" successfully"
Jan 13 20:44:14.765716 containerd[1485]: time="2025-01-13T20:44:14.765660815Z" level=info msg="StopPodSandbox for \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\" returns successfully"
Jan 13 20:44:14.766197 containerd[1485]: time="2025-01-13T20:44:14.766159562Z" level=info msg="StopPodSandbox for \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\""
Jan 13 20:44:14.766296 containerd[1485]: time="2025-01-13T20:44:14.766252668Z" level=info msg="TearDown network for sandbox \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\" successfully"
Jan 13 20:44:14.766296 containerd[1485]: time="2025-01-13T20:44:14.766266234Z" level=info msg="StopPodSandbox for \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\" returns successfully"
Jan 13 20:44:14.766543 containerd[1485]: time="2025-01-13T20:44:14.766521158Z" level=info msg="StopPodSandbox for \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\""
Jan 13 20:44:14.766626 containerd[1485]: time="2025-01-13T20:44:14.766607682Z" level=info msg="TearDown network for sandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\" successfully"
Jan 13 20:44:14.766626 containerd[1485]: time="2025-01-13T20:44:14.766621950Z" level=info msg="StopPodSandbox for \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\" returns successfully"
Jan 13 20:44:14.767294 containerd[1485]: time="2025-01-13T20:44:14.767264259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-74fml,Uid:0740e80e-5301-4176-ac9e-7bf36ee863df,Namespace:calico-system,Attempt:5,}"
Jan 13 20:44:14.777624 kubelet[2663]: I0113 20:44:14.777503 2663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-cqqtj" podStartSLOduration=1.9295166049999999 podStartE2EDuration="18.777451822s" podCreationTimestamp="2025-01-13 20:43:56 +0000 UTC" firstStartedPulling="2025-01-13 20:43:57.154860006 +0000 UTC m=+22.040056135" lastFinishedPulling="2025-01-13 20:44:14.002795223 +0000 UTC m=+38.887991352" observedRunningTime="2025-01-13 20:44:14.776498372 +0000 UTC m=+39.661694501" watchObservedRunningTime="2025-01-13 20:44:14.777451822 +0000 UTC m=+39.662647941"
Jan 13 20:44:15.116500 systemd-networkd[1423]: cali0e5b9a4a2d8: Link UP
Jan 13 20:44:15.117443 systemd-networkd[1423]: cali0e5b9a4a2d8: Gained carrier
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:14.898 [INFO][4723] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:14.919 [INFO][4723] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--z9f87-eth0 coredns-76f75df574- kube-system 24356970-d4ba-4d53-b9ce-72c96ee695b4 747 0 2025-01-13 20:43:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-z9f87 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0e5b9a4a2d8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" Namespace="kube-system" Pod="coredns-76f75df574-z9f87" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z9f87-"
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:14.920 [INFO][4723] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" Namespace="kube-system" Pod="coredns-76f75df574-z9f87" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z9f87-eth0"
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:14.991 [INFO][4778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" HandleID="k8s-pod-network.0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" Workload="localhost-k8s-coredns--76f75df574--z9f87-eth0"
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.012 [INFO][4778] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" HandleID="k8s-pod-network.0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" Workload="localhost-k8s-coredns--76f75df574--z9f87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000360aa0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-z9f87", "timestamp":"2025-01-13 20:44:14.991219375 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.012 [INFO][4778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.012 [INFO][4778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.012 [INFO][4778] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.014 [INFO][4778] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" host="localhost"
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.019 [INFO][4778] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.023 [INFO][4778] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.024 [INFO][4778] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.026 [INFO][4778] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.026 [INFO][4778] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" host="localhost"
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.027 [INFO][4778] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.057 [INFO][4778] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" host="localhost"
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.100 [INFO][4778] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" host="localhost"
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.100 [INFO][4778] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" host="localhost"
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.100 [INFO][4778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:44:15.261371 containerd[1485]: 2025-01-13 20:44:15.100 [INFO][4778] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" HandleID="k8s-pod-network.0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" Workload="localhost-k8s-coredns--76f75df574--z9f87-eth0"
Jan 13 20:44:15.262703 containerd[1485]: 2025-01-13 20:44:15.104 [INFO][4723] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" Namespace="kube-system" Pod="coredns-76f75df574-z9f87" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z9f87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--z9f87-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"24356970-d4ba-4d53-b9ce-72c96ee695b4", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 43, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-z9f87", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e5b9a4a2d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:44:15.262703 containerd[1485]: 2025-01-13 20:44:15.104 [INFO][4723] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" Namespace="kube-system" Pod="coredns-76f75df574-z9f87" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z9f87-eth0"
Jan 13 20:44:15.262703 containerd[1485]: 2025-01-13 20:44:15.104 [INFO][4723] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e5b9a4a2d8 ContainerID="0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" Namespace="kube-system" Pod="coredns-76f75df574-z9f87" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z9f87-eth0"
Jan 13 20:44:15.262703 containerd[1485]: 2025-01-13 20:44:15.117 [INFO][4723] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" Namespace="kube-system" Pod="coredns-76f75df574-z9f87" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z9f87-eth0"
Jan 13 20:44:15.262703 containerd[1485]: 2025-01-13 20:44:15.118 [INFO][4723] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" Namespace="kube-system" Pod="coredns-76f75df574-z9f87" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z9f87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--z9f87-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"24356970-d4ba-4d53-b9ce-72c96ee695b4", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 43, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f", Pod:"coredns-76f75df574-z9f87", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e5b9a4a2d8", MAC:"7e:e0:79:7f:95:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:44:15.262703 containerd[1485]: 2025-01-13 20:44:15.258 [INFO][4723] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f" Namespace="kube-system" Pod="coredns-76f75df574-z9f87" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z9f87-eth0"
Jan 13 20:44:15.304041 systemd-networkd[1423]: cali4b9c6fc0cc5: Link UP
Jan 13 20:44:15.304275 systemd-networkd[1423]: cali4b9c6fc0cc5: Gained carrier
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:14.903 [INFO][4702] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:14.919 [INFO][4702] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--qkvcn-eth0 coredns-76f75df574- kube-system acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83 753 0 2025-01-13 20:43:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-qkvcn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4b9c6fc0cc5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" Namespace="kube-system" Pod="coredns-76f75df574-qkvcn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qkvcn-"
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:14.919 [INFO][4702] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" Namespace="kube-system" Pod="coredns-76f75df574-qkvcn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qkvcn-eth0"
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:14.999 [INFO][4777] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" HandleID="k8s-pod-network.eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" Workload="localhost-k8s-coredns--76f75df574--qkvcn-eth0"
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.013 [INFO][4777] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" HandleID="k8s-pod-network.eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" Workload="localhost-k8s-coredns--76f75df574--qkvcn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318670), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-qkvcn", "timestamp":"2025-01-13 20:44:14.999341817 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.013 [INFO][4777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.100 [INFO][4777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.100 [INFO][4777] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.103 [INFO][4777] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" host="localhost"
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.107 [INFO][4777] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.115 [INFO][4777] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.239 [INFO][4777] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.258 [INFO][4777] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.258 [INFO][4777] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" host="localhost"
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.260 [INFO][4777] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.267 [INFO][4777] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" host="localhost"
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.298 [INFO][4777] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" host="localhost"
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.298 [INFO][4777] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" host="localhost"
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.298 [INFO][4777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:44:15.334205 containerd[1485]: 2025-01-13 20:44:15.298 [INFO][4777] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" HandleID="k8s-pod-network.eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" Workload="localhost-k8s-coredns--76f75df574--qkvcn-eth0"
Jan 13 20:44:15.334877 containerd[1485]: 2025-01-13 20:44:15.301 [INFO][4702] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" Namespace="kube-system" Pod="coredns-76f75df574-qkvcn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qkvcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--qkvcn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 43, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-qkvcn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b9c6fc0cc5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:44:15.334877 containerd[1485]: 2025-01-13 20:44:15.301 [INFO][4702] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" Namespace="kube-system" Pod="coredns-76f75df574-qkvcn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qkvcn-eth0"
Jan 13 20:44:15.334877 containerd[1485]: 2025-01-13 20:44:15.301 [INFO][4702] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b9c6fc0cc5 ContainerID="eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" Namespace="kube-system" Pod="coredns-76f75df574-qkvcn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qkvcn-eth0"
Jan 13 20:44:15.334877 containerd[1485]: 2025-01-13 20:44:15.304 [INFO][4702] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" Namespace="kube-system" Pod="coredns-76f75df574-qkvcn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qkvcn-eth0"
Jan 13 20:44:15.334877 containerd[1485]: 2025-01-13 20:44:15.304 [INFO][4702] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" Namespace="kube-system" Pod="coredns-76f75df574-qkvcn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qkvcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--qkvcn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 43, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5", Pod:"coredns-76f75df574-qkvcn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b9c6fc0cc5", MAC:"66:75:7b:02:9f:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:44:15.334877 containerd[1485]: 2025-01-13 20:44:15.331 [INFO][4702] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5" Namespace="kube-system" Pod="coredns-76f75df574-qkvcn" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qkvcn-eth0"
Jan 13 20:44:15.552361 systemd-networkd[1423]: cali21c78d816ed: Link UP
Jan 13 20:44:15.554601 systemd-networkd[1423]: cali21c78d816ed: Gained carrier
Jan 13 20:44:15.570389 containerd[1485]: time="2025-01-13T20:44:15.569578113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:44:15.570389 containerd[1485]: time="2025-01-13T20:44:15.569651042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:44:15.570389 containerd[1485]: time="2025-01-13T20:44:15.569664978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:44:15.570389 containerd[1485]: time="2025-01-13T20:44:15.569756993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:14.846 [INFO][4686] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:14.877 [INFO][4686] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c845b497c--48828-eth0 calico-apiserver-c845b497c- calico-apiserver 10adf362-b95c-4368-9f5a-3041f3e43b8c 752 0 2025-01-13 20:43:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c845b497c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c845b497c-48828 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali21c78d816ed [] []}} ContainerID="612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-48828" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--48828-"
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:14.877 [INFO][4686] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-48828" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--48828-eth0"
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:14.997 [INFO][4759] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" HandleID="k8s-pod-network.612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" Workload="localhost-k8s-calico--apiserver--c845b497c--48828-eth0"
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.013 [INFO][4759] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" HandleID="k8s-pod-network.612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" Workload="localhost-k8s-calico--apiserver--c845b497c--48828-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000511e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c845b497c-48828", "timestamp":"2025-01-13 20:44:14.997664903 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.014 [INFO][4759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.298 [INFO][4759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.299 [INFO][4759] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.300 [INFO][4759] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" host="localhost"
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.305 [INFO][4759] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.309 [INFO][4759] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.311 [INFO][4759] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.331 [INFO][4759] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.331 [INFO][4759] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" host="localhost"
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.333 [INFO][4759] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.406 [INFO][4759] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" host="localhost"
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.540 [INFO][4759] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" host="localhost"
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.540 [INFO][4759] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" host="localhost"
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.540 [INFO][4759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:44:15.575815 containerd[1485]: 2025-01-13 20:44:15.540 [INFO][4759] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" HandleID="k8s-pod-network.612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" Workload="localhost-k8s-calico--apiserver--c845b497c--48828-eth0"
Jan 13 20:44:15.576395 containerd[1485]: 2025-01-13 20:44:15.546 [INFO][4686] cni-plugin/k8s.go 386: Populated endpoint ContainerID="612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-48828" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--48828-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c845b497c--48828-eth0", GenerateName:"calico-apiserver-c845b497c-", Namespace:"calico-apiserver", SelfLink:"", UID:"10adf362-b95c-4368-9f5a-3041f3e43b8c", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 43, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c845b497c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c845b497c-48828", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21c78d816ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:44:15.576395 containerd[1485]: 2025-01-13 20:44:15.546 [INFO][4686] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-48828" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--48828-eth0"
Jan 13 20:44:15.576395 containerd[1485]: 2025-01-13 20:44:15.546 [INFO][4686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21c78d816ed ContainerID="612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-48828" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--48828-eth0"
Jan 13 20:44:15.576395 containerd[1485]: 2025-01-13 20:44:15.555 [INFO][4686] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-48828" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--48828-eth0"
Jan 13 20:44:15.576395 containerd[1485]: 2025-01-13 20:44:15.557 [INFO][4686] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-48828" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--48828-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c845b497c--48828-eth0", GenerateName:"calico-apiserver-c845b497c-", Namespace:"calico-apiserver", SelfLink:"", UID:"10adf362-b95c-4368-9f5a-3041f3e43b8c", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 43, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c845b497c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201", Pod:"calico-apiserver-c845b497c-48828", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21c78d816ed", MAC:"4a:05:b2:24:9c:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:44:15.576395 containerd[1485]: 2025-01-13 20:44:15.571 [INFO][4686] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-48828" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--48828-eth0"
Jan 13 20:44:15.579370 containerd[1485]: time="2025-01-13T20:44:15.578664260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:44:15.579370 containerd[1485]: time="2025-01-13T20:44:15.578717832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:44:15.579801 containerd[1485]: time="2025-01-13T20:44:15.579655762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:44:15.580072 containerd[1485]: time="2025-01-13T20:44:15.579967243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:44:15.585105 systemd[1]: Started sshd@9-10.0.0.148:22-10.0.0.1:50056.service - OpenSSH per-connection server daemon (10.0.0.1:50056).
Jan 13 20:44:15.602642 systemd-networkd[1423]: cali2cc572eb597: Link UP
Jan 13 20:44:15.603230 systemd-networkd[1423]: cali2cc572eb597: Gained carrier
Jan 13 20:44:15.618187 systemd[1]: Started cri-containerd-eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5.scope - libcontainer container eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5.
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:14.895 [INFO][4716] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:14.921 [INFO][4716] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c845b497c--psj2v-eth0 calico-apiserver-c845b497c- calico-apiserver 0d99061e-8895-489c-be69-03c406284aa9 751 0 2025-01-13 20:43:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c845b497c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c845b497c-psj2v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2cc572eb597 [] []}} ContainerID="e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-psj2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--psj2v-"
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:14.922 [INFO][4716] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-psj2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--psj2v-eth0"
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:14.994 [INFO][4786] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" HandleID="k8s-pod-network.e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" Workload="localhost-k8s-calico--apiserver--c845b497c--psj2v-eth0"
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.014 [INFO][4786] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" HandleID="k8s-pod-network.e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" Workload="localhost-k8s-calico--apiserver--c845b497c--psj2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038edf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c845b497c-psj2v", "timestamp":"2025-01-13 20:44:14.99465119 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.014 [INFO][4786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.541 [INFO][4786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.541 [INFO][4786] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.545 [INFO][4786] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" host="localhost"
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.554 [INFO][4786] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.559 [INFO][4786] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.564 [INFO][4786] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.567 [INFO][4786] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.567 [INFO][4786] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" host="localhost"
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.571 [INFO][4786] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.575 [INFO][4786] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" host="localhost"
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.586 [INFO][4786] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" host="localhost"
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.586 [INFO][4786] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" host="localhost"
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.586 [INFO][4786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:44:15.624564 containerd[1485]: 2025-01-13 20:44:15.586 [INFO][4786] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" HandleID="k8s-pod-network.e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" Workload="localhost-k8s-calico--apiserver--c845b497c--psj2v-eth0"
Jan 13 20:44:15.626155 containerd[1485]: 2025-01-13 20:44:15.594 [INFO][4716] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-psj2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--psj2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c845b497c--psj2v-eth0", GenerateName:"calico-apiserver-c845b497c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d99061e-8895-489c-be69-03c406284aa9", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 43, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c845b497c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c845b497c-psj2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cc572eb597", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:44:15.626155 containerd[1485]: 2025-01-13 20:44:15.594 [INFO][4716] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-psj2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--psj2v-eth0"
Jan 13 20:44:15.626155 containerd[1485]: 2025-01-13 20:44:15.594 [INFO][4716] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cc572eb597 ContainerID="e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-psj2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--psj2v-eth0"
Jan 13 20:44:15.626155 containerd[1485]: 2025-01-13 20:44:15.599 [INFO][4716] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-psj2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--psj2v-eth0"
Jan 13 20:44:15.626155 containerd[1485]: 2025-01-13 20:44:15.599 [INFO][4716] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-psj2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--psj2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c845b497c--psj2v-eth0", GenerateName:"calico-apiserver-c845b497c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d99061e-8895-489c-be69-03c406284aa9", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 43, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c845b497c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4", Pod:"calico-apiserver-c845b497c-psj2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cc572eb597", MAC:"76:f4:a9:6c:c0:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:44:15.626155 containerd[1485]: 2025-01-13 20:44:15.610 [INFO][4716] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4" Namespace="calico-apiserver" Pod="calico-apiserver-c845b497c-psj2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--c845b497c--psj2v-eth0"
Jan 13 20:44:15.633545 systemd[1]: Started cri-containerd-0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f.scope - libcontainer container 0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f.
Jan 13 20:44:15.640874 containerd[1485]: time="2025-01-13T20:44:15.640451121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:44:15.640874 containerd[1485]: time="2025-01-13T20:44:15.640603440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:44:15.640874 containerd[1485]: time="2025-01-13T20:44:15.640663855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:44:15.641231 containerd[1485]: time="2025-01-13T20:44:15.641167201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:44:15.648087 sshd[4872]: Accepted publickey for core from 10.0.0.1 port 50056 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8
Jan 13 20:44:15.655412 sshd-session[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:44:15.695684 systemd[1]: run-netns-cni\x2de3d723ba\x2d51eb\x2d202e\x2d016f\x2da96faa43d69e.mount: Deactivated successfully.
Jan 13 20:44:15.695832 systemd[1]: run-netns-cni\x2d07bb059b\x2db451\x2d58d9\x2d91f5\x2d96116575d317.mount: Deactivated successfully.
Jan 13 20:44:15.713313 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:44:15.714320 systemd-networkd[1423]: cali37b6fa3b3be: Link UP
Jan 13 20:44:15.721506 systemd-logind[1473]: New session 10 of user core.
Jan 13 20:44:15.722781 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 20:44:15.725136 systemd-networkd[1423]: cali37b6fa3b3be: Gained carrier
Jan 13 20:44:15.736320 systemd[1]: Started cri-containerd-612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201.scope - libcontainer container 612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201.
Jan 13 20:44:15.745934 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:14.842 [INFO][4675] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:14.875 [INFO][4675] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--69dc874945--vclf2-eth0 calico-kube-controllers-69dc874945- calico-system 5379735e-d3c8-481c-ba97-384db8752ee4 754 0 2025-01-13 20:43:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69dc874945 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-69dc874945-vclf2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali37b6fa3b3be [] []}} ContainerID="e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" Namespace="calico-system" Pod="calico-kube-controllers-69dc874945-vclf2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69dc874945--vclf2-"
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:14.875 [INFO][4675] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" Namespace="calico-system" Pod="calico-kube-controllers-69dc874945-vclf2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69dc874945--vclf2-eth0"
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:14.998 [INFO][4757] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" HandleID="k8s-pod-network.e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" Workload="localhost-k8s-calico--kube--controllers--69dc874945--vclf2-eth0"
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.014 [INFO][4757] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" HandleID="k8s-pod-network.e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" Workload="localhost-k8s-calico--kube--controllers--69dc874945--vclf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050520), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-69dc874945-vclf2", "timestamp":"2025-01-13 20:44:14.998372066 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.015 [INFO][4757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.586 [INFO][4757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.587 [INFO][4757] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.590 [INFO][4757] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" host="localhost"
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.598 [INFO][4757] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.618 [INFO][4757] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.621 [INFO][4757] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.623 [INFO][4757] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.623 [INFO][4757] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" host="localhost"
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.624 [INFO][4757] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.629 [INFO][4757] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" host="localhost"
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.641 [INFO][4757] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" host="localhost"
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.641 [INFO][4757] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" host="localhost"
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.641 [INFO][4757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:44:15.756978 containerd[1485]: 2025-01-13 20:44:15.641 [INFO][4757] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" HandleID="k8s-pod-network.e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" Workload="localhost-k8s-calico--kube--controllers--69dc874945--vclf2-eth0" Jan 13 20:44:15.757784 containerd[1485]: 2025-01-13 20:44:15.704 [INFO][4675] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" Namespace="calico-system" Pod="calico-kube-controllers-69dc874945-vclf2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69dc874945--vclf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69dc874945--vclf2-eth0", GenerateName:"calico-kube-controllers-69dc874945-", Namespace:"calico-system", SelfLink:"", UID:"5379735e-d3c8-481c-ba97-384db8752ee4", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 43, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69dc874945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-69dc874945-vclf2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37b6fa3b3be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:44:15.757784 containerd[1485]: 2025-01-13 20:44:15.705 [INFO][4675] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" Namespace="calico-system" Pod="calico-kube-controllers-69dc874945-vclf2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69dc874945--vclf2-eth0" Jan 13 20:44:15.757784 containerd[1485]: 2025-01-13 20:44:15.705 [INFO][4675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37b6fa3b3be ContainerID="e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" Namespace="calico-system" Pod="calico-kube-controllers-69dc874945-vclf2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69dc874945--vclf2-eth0" Jan 13 20:44:15.757784 containerd[1485]: 2025-01-13 20:44:15.726 [INFO][4675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" Namespace="calico-system" Pod="calico-kube-controllers-69dc874945-vclf2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69dc874945--vclf2-eth0" Jan 13 20:44:15.757784 containerd[1485]: 2025-01-13 20:44:15.729 [INFO][4675] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" Namespace="calico-system" Pod="calico-kube-controllers-69dc874945-vclf2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69dc874945--vclf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69dc874945--vclf2-eth0", GenerateName:"calico-kube-controllers-69dc874945-", Namespace:"calico-system", SelfLink:"", UID:"5379735e-d3c8-481c-ba97-384db8752ee4", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 43, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69dc874945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384", Pod:"calico-kube-controllers-69dc874945-vclf2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37b6fa3b3be", MAC:"5e:92:47:3c:09:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:44:15.757784 containerd[1485]: 2025-01-13 20:44:15.745 [INFO][4675] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384" Namespace="calico-system" Pod="calico-kube-controllers-69dc874945-vclf2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69dc874945--vclf2-eth0" Jan 13 20:44:15.766034 kubelet[2663]: I0113 20:44:15.765105 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:44:15.766034 kubelet[2663]: E0113 20:44:15.765812 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:15.795325 containerd[1485]: time="2025-01-13T20:44:15.792489132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:44:15.795325 containerd[1485]: time="2025-01-13T20:44:15.792555909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:44:15.795325 containerd[1485]: time="2025-01-13T20:44:15.792570296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:44:15.800844 containerd[1485]: time="2025-01-13T20:44:15.799710389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:44:15.819134 containerd[1485]: time="2025-01-13T20:44:15.818873995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qkvcn,Uid:acbb54e8-9cf8-4618-abd6-3a9f5bc9cb83,Namespace:kube-system,Attempt:5,} returns sandbox id \"eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5\"" Jan 13 20:44:15.824520 kubelet[2663]: E0113 20:44:15.823922 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:15.832065 containerd[1485]: time="2025-01-13T20:44:15.831829506Z" level=info msg="CreateContainer within sandbox \"eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:44:15.856847 systemd-networkd[1423]: cali5ff0f629f44: Link UP Jan 13 20:44:15.857985 systemd-networkd[1423]: cali5ff0f629f44: Gained carrier Jan 13 20:44:15.862487 systemd[1]: Started cri-containerd-e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4.scope - libcontainer container e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4. Jan 13 20:44:15.867047 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:44:15.875080 containerd[1485]: time="2025-01-13T20:44:15.875040076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9f87,Uid:24356970-d4ba-4d53-b9ce-72c96ee695b4,Namespace:kube-system,Attempt:5,} returns sandbox id \"0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f\"" Jan 13 20:44:15.876960 kubelet[2663]: E0113 20:44:15.876238 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:15.884641 containerd[1485]: time="2025-01-13T20:44:15.884489823Z" level=info msg="CreateContainer within sandbox \"0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:44:15.905503 containerd[1485]: time="2025-01-13T20:44:15.905241413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:44:15.905503 containerd[1485]: time="2025-01-13T20:44:15.905317378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:44:15.905503 containerd[1485]: time="2025-01-13T20:44:15.905334921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:44:15.905503 containerd[1485]: time="2025-01-13T20:44:15.905466931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:44:15.921547 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:44:15.925530 containerd[1485]: time="2025-01-13T20:44:15.925493144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-48828,Uid:10adf362-b95c-4368-9f5a-3041f3e43b8c,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201\"" Jan 13 20:44:15.927440 containerd[1485]: time="2025-01-13T20:44:15.927393742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:44:15.941277 systemd[1]: Started cri-containerd-e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384.scope - libcontainer container e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384. Jan 13 20:44:15.958523 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:44:15.969908 sshd[5035]: Connection closed by 10.0.0.1 port 50056 Jan 13 20:44:15.970650 sshd-session[4872]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:14.879 [INFO][4740] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:14.891 [INFO][4740] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--74fml-eth0 csi-node-driver- calico-system 0740e80e-5301-4176-ac9e-7bf36ee863df 604 0 2025-01-13 20:43:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-74fml eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5ff0f629f44 [] []}} ContainerID="f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" Namespace="calico-system" Pod="csi-node-driver-74fml" WorkloadEndpoint="localhost-k8s-csi--node--driver--74fml-" Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:14.891 [INFO][4740] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" Namespace="calico-system" Pod="csi-node-driver-74fml" WorkloadEndpoint="localhost-k8s-csi--node--driver--74fml-eth0" Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.008 [INFO][4772] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" HandleID="k8s-pod-network.f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" Workload="localhost-k8s-csi--node--driver--74fml-eth0" Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.016 [INFO][4772] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" HandleID="k8s-pod-network.f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" Workload="localhost-k8s-csi--node--driver--74fml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bcff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"csi-node-driver-74fml", "timestamp":"2025-01-13 20:44:15.008579022 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.016 [INFO][4772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.648 [INFO][4772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.648 [INFO][4772] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.672 [INFO][4772] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" host="localhost" Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.708 [INFO][4772] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.728 [INFO][4772] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.737 [INFO][4772] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.747 [INFO][4772] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.755 [INFO][4772] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" host="localhost" Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.761 [INFO][4772] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1 Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.769 [INFO][4772] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" host="localhost" Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.801 [INFO][4772] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" host="localhost" Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.801 [INFO][4772] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" host="localhost" Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.801 [INFO][4772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:44:15.975521 containerd[1485]: 2025-01-13 20:44:15.801 [INFO][4772] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" HandleID="k8s-pod-network.f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" Workload="localhost-k8s-csi--node--driver--74fml-eth0" Jan 13 20:44:15.976009 containerd[1485]: 2025-01-13 20:44:15.842 [INFO][4740] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" Namespace="calico-system" Pod="csi-node-driver-74fml" WorkloadEndpoint="localhost-k8s-csi--node--driver--74fml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--74fml-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0740e80e-5301-4176-ac9e-7bf36ee863df", ResourceVersion:"604", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 43, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-74fml", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5ff0f629f44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:44:15.976009 containerd[1485]: 2025-01-13 20:44:15.842 [INFO][4740] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" Namespace="calico-system" Pod="csi-node-driver-74fml" WorkloadEndpoint="localhost-k8s-csi--node--driver--74fml-eth0" Jan 13 20:44:15.976009 containerd[1485]: 2025-01-13 20:44:15.842 [INFO][4740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ff0f629f44 ContainerID="f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" Namespace="calico-system" Pod="csi-node-driver-74fml" WorkloadEndpoint="localhost-k8s-csi--node--driver--74fml-eth0" Jan 13 20:44:15.976009 containerd[1485]: 2025-01-13 20:44:15.859 [INFO][4740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" Namespace="calico-system" Pod="csi-node-driver-74fml" WorkloadEndpoint="localhost-k8s-csi--node--driver--74fml-eth0" Jan 13 20:44:15.976009 containerd[1485]: 2025-01-13 20:44:15.866 [INFO][4740] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" Namespace="calico-system" Pod="csi-node-driver-74fml" WorkloadEndpoint="localhost-k8s-csi--node--driver--74fml-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--74fml-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0740e80e-5301-4176-ac9e-7bf36ee863df", ResourceVersion:"604", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 43, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1", Pod:"csi-node-driver-74fml", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5ff0f629f44", MAC:"42:11:c8:f7:e6:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:44:15.976009 containerd[1485]: 2025-01-13 20:44:15.963 [INFO][4740] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1" Namespace="calico-system" Pod="csi-node-driver-74fml" WorkloadEndpoint="localhost-k8s-csi--node--driver--74fml-eth0" Jan 13 20:44:15.979409 containerd[1485]: time="2025-01-13T20:44:15.979083046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c845b497c-psj2v,Uid:0d99061e-8895-489c-be69-03c406284aa9,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4\"" Jan 13 20:44:15.981887 systemd[1]: sshd@9-10.0.0.148:22-10.0.0.1:50056.service: Deactivated successfully. Jan 13 20:44:15.985333 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:44:15.989667 systemd-logind[1473]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:44:15.991223 systemd-logind[1473]: Removed session 10. 
Jan 13 20:44:15.994372 containerd[1485]: time="2025-01-13T20:44:15.994273099Z" level=info msg="CreateContainer within sandbox \"eb9a636526d865b44926b54a9ee22d502857324f97f81469a6bd3f79cbe3b4f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c5021a7ebb244141e8fed148a3a80282e58d2bdbef175d855c2f0c779302d57\"" Jan 13 20:44:15.996661 containerd[1485]: time="2025-01-13T20:44:15.996633920Z" level=info msg="StartContainer for \"4c5021a7ebb244141e8fed148a3a80282e58d2bdbef175d855c2f0c779302d57\"" Jan 13 20:44:16.005951 containerd[1485]: time="2025-01-13T20:44:16.005399687Z" level=info msg="CreateContainer within sandbox \"0d67fe558ed179179b8b04970bb62793ee1056224f6a131fd45f14eebdbd3c0f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0360c75ea0696937f53c11b707f56cc23c8a959af6df4f429674495671076a19\"" Jan 13 20:44:16.007241 containerd[1485]: time="2025-01-13T20:44:16.006423609Z" level=info msg="StartContainer for \"0360c75ea0696937f53c11b707f56cc23c8a959af6df4f429674495671076a19\"" Jan 13 20:44:16.018006 containerd[1485]: time="2025-01-13T20:44:16.017961775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dc874945-vclf2,Uid:5379735e-d3c8-481c-ba97-384db8752ee4,Namespace:calico-system,Attempt:5,} returns sandbox id \"e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384\"" Jan 13 20:44:16.039523 containerd[1485]: time="2025-01-13T20:44:16.039228331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:44:16.040424 containerd[1485]: time="2025-01-13T20:44:16.040363174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:44:16.041185 containerd[1485]: time="2025-01-13T20:44:16.040528768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:44:16.041185 containerd[1485]: time="2025-01-13T20:44:16.040883060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:44:16.050057 kernel: bpftool[5277]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:44:16.055650 systemd[1]: Started cri-containerd-4c5021a7ebb244141e8fed148a3a80282e58d2bdbef175d855c2f0c779302d57.scope - libcontainer container 4c5021a7ebb244141e8fed148a3a80282e58d2bdbef175d855c2f0c779302d57. Jan 13 20:44:16.071217 systemd[1]: Started cri-containerd-0360c75ea0696937f53c11b707f56cc23c8a959af6df4f429674495671076a19.scope - libcontainer container 0360c75ea0696937f53c11b707f56cc23c8a959af6df4f429674495671076a19. Jan 13 20:44:16.076412 systemd[1]: Started cri-containerd-f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1.scope - libcontainer container f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1. 
Jan 13 20:44:16.097769 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:44:16.240601 containerd[1485]: time="2025-01-13T20:44:16.240546007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-74fml,Uid:0740e80e-5301-4176-ac9e-7bf36ee863df,Namespace:calico-system,Attempt:5,} returns sandbox id \"f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1\"" Jan 13 20:44:16.240813 containerd[1485]: time="2025-01-13T20:44:16.240656135Z" level=info msg="StartContainer for \"0360c75ea0696937f53c11b707f56cc23c8a959af6df4f429674495671076a19\" returns successfully" Jan 13 20:44:16.240921 containerd[1485]: time="2025-01-13T20:44:16.240870723Z" level=info msg="StartContainer for \"4c5021a7ebb244141e8fed148a3a80282e58d2bdbef175d855c2f0c779302d57\" returns successfully" Jan 13 20:44:16.356411 systemd-networkd[1423]: vxlan.calico: Link UP Jan 13 20:44:16.356421 systemd-networkd[1423]: vxlan.calico: Gained carrier Jan 13 20:44:16.538199 systemd-networkd[1423]: cali0e5b9a4a2d8: Gained IPv6LL Jan 13 20:44:16.770444 kubelet[2663]: E0113 20:44:16.770383 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:16.775540 kubelet[2663]: E0113 20:44:16.775511 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:16.781311 kubelet[2663]: I0113 20:44:16.781219 2663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qkvcn" podStartSLOduration=25.781172474 podStartE2EDuration="25.781172474s" podCreationTimestamp="2025-01-13 20:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:44:16.780777154 +0000 UTC m=+41.665973273" watchObservedRunningTime="2025-01-13 20:44:16.781172474 +0000 UTC m=+41.666368624" Jan 13 20:44:16.798981 kubelet[2663]: I0113 20:44:16.798905 2663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-z9f87" podStartSLOduration=25.798851823 podStartE2EDuration="25.798851823s" podCreationTimestamp="2025-01-13 20:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:44:16.79690027 +0000 UTC m=+41.682096399" watchObservedRunningTime="2025-01-13 20:44:16.798851823 +0000 UTC m=+41.684047953" Jan 13 20:44:16.859325 systemd-networkd[1423]: cali21c78d816ed: Gained IPv6LL Jan 13 20:44:16.922259 systemd-networkd[1423]: cali2cc572eb597: Gained IPv6LL Jan 13 20:44:17.050280 systemd-networkd[1423]: cali4b9c6fc0cc5: Gained IPv6LL Jan 13 20:44:17.178255 systemd-networkd[1423]: cali5ff0f629f44: Gained IPv6LL Jan 13 20:44:17.562287 systemd-networkd[1423]: cali37b6fa3b3be: Gained IPv6LL Jan 13 20:44:17.626593 systemd-networkd[1423]: vxlan.calico: Gained IPv6LL Jan 13 20:44:17.792673 kubelet[2663]: E0113 20:44:17.792643 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:17.793161 kubelet[2663]: E0113 20:44:17.793004 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:18.163159 containerd[1485]: time="2025-01-13T20:44:18.163098651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:18.163888 containerd[1485]: time="2025-01-13T20:44:18.163854004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 20:44:18.165101 containerd[1485]: time="2025-01-13T20:44:18.165067936Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:18.167503 containerd[1485]: time="2025-01-13T20:44:18.167450235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:18.168001 containerd[1485]: time="2025-01-13T20:44:18.167965332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.240539229s" Jan 13 20:44:18.168001 containerd[1485]: time="2025-01-13T20:44:18.167991762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 20:44:18.168664 containerd[1485]: time="2025-01-13T20:44:18.168513361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:44:18.169533 containerd[1485]: time="2025-01-13T20:44:18.169505793Z" level=info msg="CreateContainer within sandbox \"612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:44:18.183626 containerd[1485]: time="2025-01-13T20:44:18.183591269Z" level=info msg="CreateContainer within sandbox \"612e07e6ac7c18a0ccabdc69392fe5d9c7e54e4446a63cc0445120b08609f201\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"56ef74a56f5f0bc21418d56e88a7122eaf4dc634387cd62abb75281e3a9ece1c\"" Jan 13 20:44:18.184101 containerd[1485]: time="2025-01-13T20:44:18.184070629Z" level=info msg="StartContainer for \"56ef74a56f5f0bc21418d56e88a7122eaf4dc634387cd62abb75281e3a9ece1c\"" Jan 13 20:44:18.220165 systemd[1]: Started cri-containerd-56ef74a56f5f0bc21418d56e88a7122eaf4dc634387cd62abb75281e3a9ece1c.scope - libcontainer container 56ef74a56f5f0bc21418d56e88a7122eaf4dc634387cd62abb75281e3a9ece1c. 
Jan 13 20:44:18.263351 containerd[1485]: time="2025-01-13T20:44:18.263307525Z" level=info msg="StartContainer for \"56ef74a56f5f0bc21418d56e88a7122eaf4dc634387cd62abb75281e3a9ece1c\" returns successfully" Jan 13 20:44:18.796341 kubelet[2663]: E0113 20:44:18.796303 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:18.796837 kubelet[2663]: E0113 20:44:18.796286 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:19.080287 containerd[1485]: time="2025-01-13T20:44:19.080158052Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:19.100149 containerd[1485]: time="2025-01-13T20:44:19.099994997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 20:44:19.101881 containerd[1485]: time="2025-01-13T20:44:19.101853422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 933.305575ms" Jan 13 20:44:19.101881 containerd[1485]: time="2025-01-13T20:44:19.101881555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 20:44:19.102548 containerd[1485]: time="2025-01-13T20:44:19.102374710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 20:44:19.103409 containerd[1485]: time="2025-01-13T20:44:19.103349960Z" level=info msg="CreateContainer within sandbox \"e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:44:19.470233 containerd[1485]: time="2025-01-13T20:44:19.470180310Z" level=info msg="CreateContainer within sandbox \"e68a7d0c62ad38df494596ff72372876a9e2cef365026d9d9de9363de93642c4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"896aa91d39192e5257a9f3e19342a60f5d42d2be057020588ddd8d60ed4da202\"" Jan 13 20:44:19.472188 containerd[1485]: time="2025-01-13T20:44:19.471111585Z" level=info msg="StartContainer for \"896aa91d39192e5257a9f3e19342a60f5d42d2be057020588ddd8d60ed4da202\"" Jan 13 20:44:19.515168 systemd[1]: Started cri-containerd-896aa91d39192e5257a9f3e19342a60f5d42d2be057020588ddd8d60ed4da202.scope - libcontainer container 896aa91d39192e5257a9f3e19342a60f5d42d2be057020588ddd8d60ed4da202. 
Jan 13 20:44:19.560930 containerd[1485]: time="2025-01-13T20:44:19.560879387Z" level=info msg="StartContainer for \"896aa91d39192e5257a9f3e19342a60f5d42d2be057020588ddd8d60ed4da202\" returns successfully" Jan 13 20:44:19.800548 kubelet[2663]: I0113 20:44:19.800434 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:44:19.810211 kubelet[2663]: I0113 20:44:19.810168 2663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c845b497c-48828" podStartSLOduration=21.569046795 podStartE2EDuration="23.810121601s" podCreationTimestamp="2025-01-13 20:43:56 +0000 UTC" firstStartedPulling="2025-01-13 20:44:15.927209411 +0000 UTC m=+40.812405540" lastFinishedPulling="2025-01-13 20:44:18.168284217 +0000 UTC m=+43.053480346" observedRunningTime="2025-01-13 20:44:18.974844661 +0000 UTC m=+43.860040790" watchObservedRunningTime="2025-01-13 20:44:19.810121601 +0000 UTC m=+44.695317730" Jan 13 20:44:20.802208 kubelet[2663]: I0113 20:44:20.802173 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:44:20.991055 systemd[1]: Started sshd@10-10.0.0.148:22-10.0.0.1:34884.service - OpenSSH per-connection server daemon (10.0.0.1:34884). Jan 13 20:44:21.050412 sshd[5528]: Accepted publickey for core from 10.0.0.1 port 34884 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:21.052733 sshd-session[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:21.057678 systemd-logind[1473]: New session 11 of user core. Jan 13 20:44:21.064146 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:44:21.202437 sshd[5530]: Connection closed by 10.0.0.1 port 34884 Jan 13 20:44:21.202946 sshd-session[5528]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:21.211472 systemd[1]: sshd@10-10.0.0.148:22-10.0.0.1:34884.service: Deactivated successfully. Jan 13 20:44:21.214172 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:44:21.216193 systemd-logind[1473]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:44:21.225347 systemd[1]: Started sshd@11-10.0.0.148:22-10.0.0.1:34890.service - OpenSSH per-connection server daemon (10.0.0.1:34890). Jan 13 20:44:21.226244 systemd-logind[1473]: Removed session 11. Jan 13 20:44:21.263259 sshd[5543]: Accepted publickey for core from 10.0.0.1 port 34890 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:21.264538 sshd-session[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:21.268311 systemd-logind[1473]: New session 12 of user core. Jan 13 20:44:21.277117 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 13 20:44:21.340146 containerd[1485]: time="2025-01-13T20:44:21.339843027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:21.341603 containerd[1485]: time="2025-01-13T20:44:21.341300700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 20:44:21.343031 containerd[1485]: time="2025-01-13T20:44:21.342975103Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:21.346184 containerd[1485]: time="2025-01-13T20:44:21.345918934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:21.347378 containerd[1485]: time="2025-01-13T20:44:21.347345668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.244933097s" Jan 13 20:44:21.347436 containerd[1485]: time="2025-01-13T20:44:21.347392968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 20:44:21.347940 containerd[1485]: time="2025-01-13T20:44:21.347913725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:44:21.357853 containerd[1485]: time="2025-01-13T20:44:21.357784685Z" level=info msg="CreateContainer within sandbox \"e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 20:44:21.377288 containerd[1485]: time="2025-01-13T20:44:21.377237637Z" level=info msg="CreateContainer within sandbox \"e07da9b4eda2db2958a4f0d86385ec7e0e3d20d48e2a0094d9fe365901679384\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a97eaa403b5b5b81ed9dfdeefd6df01875e335bbe55eb6d5a59cf48e13a4e2e3\"" Jan 13 20:44:21.378741 containerd[1485]: time="2025-01-13T20:44:21.377725071Z" level=info msg="StartContainer for \"a97eaa403b5b5b81ed9dfdeefd6df01875e335bbe55eb6d5a59cf48e13a4e2e3\"" Jan 13 20:44:21.426286 systemd[1]: Started cri-containerd-a97eaa403b5b5b81ed9dfdeefd6df01875e335bbe55eb6d5a59cf48e13a4e2e3.scope - libcontainer container a97eaa403b5b5b81ed9dfdeefd6df01875e335bbe55eb6d5a59cf48e13a4e2e3. Jan 13 20:44:21.428507 sshd[5545]: Connection closed by 10.0.0.1 port 34890 Jan 13 20:44:21.429357 sshd-session[5543]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:21.437468 systemd[1]: sshd@11-10.0.0.148:22-10.0.0.1:34890.service: Deactivated successfully. Jan 13 20:44:21.442234 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:44:21.443916 systemd-logind[1473]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:44:21.452387 systemd[1]: Started sshd@12-10.0.0.148:22-10.0.0.1:34906.service - OpenSSH per-connection server daemon (10.0.0.1:34906). Jan 13 20:44:21.457483 systemd-logind[1473]: Removed session 12. 
Jan 13 20:44:21.528877 containerd[1485]: time="2025-01-13T20:44:21.528824303Z" level=info msg="StartContainer for \"a97eaa403b5b5b81ed9dfdeefd6df01875e335bbe55eb6d5a59cf48e13a4e2e3\" returns successfully" Jan 13 20:44:21.551223 sshd[5583]: Accepted publickey for core from 10.0.0.1 port 34906 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:21.553304 sshd-session[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:21.557813 systemd-logind[1473]: New session 13 of user core. Jan 13 20:44:21.565256 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:44:21.697614 sshd[5598]: Connection closed by 10.0.0.1 port 34906 Jan 13 20:44:21.698099 sshd-session[5583]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:21.702038 systemd[1]: sshd@12-10.0.0.148:22-10.0.0.1:34906.service: Deactivated successfully. Jan 13 20:44:21.704583 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:44:21.706875 systemd-logind[1473]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:44:21.708168 systemd-logind[1473]: Removed session 13. Jan 13 20:44:21.849160 kubelet[2663]: I0113 20:44:21.848907 2663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69dc874945-vclf2" podStartSLOduration=20.520673199 podStartE2EDuration="25.848867273s" podCreationTimestamp="2025-01-13 20:43:56 +0000 UTC" firstStartedPulling="2025-01-13 20:44:16.019548937 +0000 UTC m=+40.904745066" lastFinishedPulling="2025-01-13 20:44:21.347743011 +0000 UTC m=+46.232939140" observedRunningTime="2025-01-13 20:44:21.848171564 +0000 UTC m=+46.733367693" watchObservedRunningTime="2025-01-13 20:44:21.848867273 +0000 UTC m=+46.734063402" Jan 13 20:44:21.849160 kubelet[2663]: I0113 20:44:21.848981 2663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c845b497c-psj2v" podStartSLOduration=22.73072361 podStartE2EDuration="25.848965208s" podCreationTimestamp="2025-01-13 20:43:56 +0000 UTC" firstStartedPulling="2025-01-13 20:44:15.983920859 +0000 UTC m=+40.869116988" lastFinishedPulling="2025-01-13 20:44:19.102162457 +0000 UTC m=+43.987358586" observedRunningTime="2025-01-13 20:44:19.810394048 +0000 UTC m=+44.695590177" watchObservedRunningTime="2025-01-13 20:44:21.848965208 +0000 UTC m=+46.734161337" Jan 13 20:44:24.801986 containerd[1485]: time="2025-01-13T20:44:24.801916896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:24.802876 containerd[1485]: time="2025-01-13T20:44:24.802818504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 20:44:24.805354 containerd[1485]: time="2025-01-13T20:44:24.805315053Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:24.807568 containerd[1485]: time="2025-01-13T20:44:24.807535790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:24.808225 containerd[1485]: time="2025-01-13T20:44:24.808188918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 3.460248402s" Jan 13 20:44:24.808225 containerd[1485]: time="2025-01-13T20:44:24.808222572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 20:44:24.810310 containerd[1485]: time="2025-01-13T20:44:24.810262777Z" level=info msg="CreateContainer within sandbox \"f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:44:24.843833 containerd[1485]: time="2025-01-13T20:44:24.843780975Z" level=info msg="CreateContainer within sandbox \"f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c508dc721d0bab750f4ffdffd4d3f0efd5ae0bd8191373284e7c352736451a89\"" Jan 13 20:44:24.844587 containerd[1485]: time="2025-01-13T20:44:24.844502412Z" level=info msg="StartContainer for \"c508dc721d0bab750f4ffdffd4d3f0efd5ae0bd8191373284e7c352736451a89\"" Jan 13 20:44:24.881224 systemd[1]: Started cri-containerd-c508dc721d0bab750f4ffdffd4d3f0efd5ae0bd8191373284e7c352736451a89.scope - libcontainer container c508dc721d0bab750f4ffdffd4d3f0efd5ae0bd8191373284e7c352736451a89. Jan 13 20:44:24.921300 containerd[1485]: time="2025-01-13T20:44:24.920923990Z" level=info msg="StartContainer for \"c508dc721d0bab750f4ffdffd4d3f0efd5ae0bd8191373284e7c352736451a89\" returns successfully" Jan 13 20:44:24.924550 containerd[1485]: time="2025-01-13T20:44:24.924509172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:44:25.930703 kubelet[2663]: I0113 20:44:25.930664 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:44:25.931464 kubelet[2663]: E0113 20:44:25.931440 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:26.565700 containerd[1485]: time="2025-01-13T20:44:26.565636536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:26.566517 containerd[1485]: time="2025-01-13T20:44:26.566446119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 20:44:26.567954 containerd[1485]: time="2025-01-13T20:44:26.567927785Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:26.570706 containerd[1485]: time="2025-01-13T20:44:26.570666723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:26.571478 containerd[1485]: time="2025-01-13T20:44:26.571420812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.646866453s" Jan 13 20:44:26.571516 containerd[1485]: time="2025-01-13T20:44:26.571477279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 20:44:26.573421 containerd[1485]: time="2025-01-13T20:44:26.573366315Z" level=info msg="CreateContainer within sandbox \"f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:44:26.589839 containerd[1485]: time="2025-01-13T20:44:26.589789910Z" level=info msg="CreateContainer within sandbox \"f571d5ff1d80f7919331f7c4ce6757ce989e9cb4bd7a1dfdf7facb218aa067f1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8c91bbe7d67b0d3e66b8cf6bcd7f25038ec29760b66c56c1730f97ceba81505d\"" Jan 13 20:44:26.590493 containerd[1485]: time="2025-01-13T20:44:26.590439591Z" level=info msg="StartContainer for \"8c91bbe7d67b0d3e66b8cf6bcd7f25038ec29760b66c56c1730f97ceba81505d\"" Jan 13 20:44:26.622152 systemd[1]: Started cri-containerd-8c91bbe7d67b0d3e66b8cf6bcd7f25038ec29760b66c56c1730f97ceba81505d.scope - libcontainer container 8c91bbe7d67b0d3e66b8cf6bcd7f25038ec29760b66c56c1730f97ceba81505d. Jan 13 20:44:26.657517 containerd[1485]: time="2025-01-13T20:44:26.657457986Z" level=info msg="StartContainer for \"8c91bbe7d67b0d3e66b8cf6bcd7f25038ec29760b66c56c1730f97ceba81505d\" returns successfully" Jan 13 20:44:26.718164 systemd[1]: Started sshd@13-10.0.0.148:22-10.0.0.1:34918.service - OpenSSH per-connection server daemon (10.0.0.1:34918). Jan 13 20:44:26.768298 sshd[5782]: Accepted publickey for core from 10.0.0.1 port 34918 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:26.769852 sshd-session[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:26.774250 systemd-logind[1473]: New session 14 of user core. Jan 13 20:44:26.781148 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:44:26.896820 kubelet[2663]: I0113 20:44:26.896763 2663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-74fml" podStartSLOduration=20.567884108 podStartE2EDuration="30.896715764s" podCreationTimestamp="2025-01-13 20:43:56 +0000 UTC" firstStartedPulling="2025-01-13 20:44:16.242897819 +0000 UTC m=+41.128093949" lastFinishedPulling="2025-01-13 20:44:26.571729476 +0000 UTC m=+51.456925605" observedRunningTime="2025-01-13 20:44:26.894880269 +0000 UTC m=+51.780076398" watchObservedRunningTime="2025-01-13 20:44:26.896715764 +0000 UTC m=+51.781911903" Jan 13 20:44:26.904288 sshd[5784]: Connection closed by 10.0.0.1 port 34918 Jan 13 20:44:26.904683 sshd-session[5782]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:26.911729 systemd[1]: sshd@13-10.0.0.148:22-10.0.0.1:34918.service: Deactivated successfully. Jan 13 20:44:26.914198 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:44:26.914965 systemd-logind[1473]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:44:26.916064 systemd-logind[1473]: Removed session 14. 
Jan 13 20:44:27.287105 kubelet[2663]: I0113 20:44:27.286971 2663 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:44:27.287907 kubelet[2663]: I0113 20:44:27.287883 2663 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:44:31.916910 systemd[1]: Started sshd@14-10.0.0.148:22-10.0.0.1:58952.service - OpenSSH per-connection server daemon (10.0.0.1:58952). Jan 13 20:44:31.961123 sshd[5800]: Accepted publickey for core from 10.0.0.1 port 58952 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:31.962579 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:31.966625 systemd-logind[1473]: New session 15 of user core. Jan 13 20:44:31.974146 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:44:32.096191 sshd[5802]: Connection closed by 10.0.0.1 port 58952 Jan 13 20:44:32.097272 sshd-session[5800]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:32.105932 systemd[1]: sshd@14-10.0.0.148:22-10.0.0.1:58952.service: Deactivated successfully. Jan 13 20:44:32.107616 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:44:32.109314 systemd-logind[1473]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:44:32.116251 systemd[1]: Started sshd@15-10.0.0.148:22-10.0.0.1:58962.service - OpenSSH per-connection server daemon (10.0.0.1:58962). Jan 13 20:44:32.117159 systemd-logind[1473]: Removed session 15. Jan 13 20:44:32.156784 sshd[5814]: Accepted publickey for core from 10.0.0.1 port 58962 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:32.158087 sshd-session[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:32.162090 systemd-logind[1473]: New session 16 of user core. Jan 13 20:44:32.168195 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:44:32.350126 sshd[5816]: Connection closed by 10.0.0.1 port 58962 Jan 13 20:44:32.350545 sshd-session[5814]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:32.363008 systemd[1]: sshd@15-10.0.0.148:22-10.0.0.1:58962.service: Deactivated successfully. Jan 13 20:44:32.364882 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:44:32.366566 systemd-logind[1473]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:44:32.367911 systemd[1]: Started sshd@16-10.0.0.148:22-10.0.0.1:58976.service - OpenSSH per-connection server daemon (10.0.0.1:58976). Jan 13 20:44:32.368706 systemd-logind[1473]: Removed session 16. Jan 13 20:44:32.416226 sshd[5827]: Accepted publickey for core from 10.0.0.1 port 58976 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:32.417679 sshd-session[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:32.421463 systemd-logind[1473]: New session 17 of user core. Jan 13 20:44:32.429131 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:44:34.036687 sshd[5829]: Connection closed by 10.0.0.1 port 58976 Jan 13 20:44:34.037137 sshd-session[5827]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:34.047642 systemd[1]: sshd@16-10.0.0.148:22-10.0.0.1:58976.service: Deactivated successfully. 
Jan 13 20:44:34.050136 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:44:34.050876 systemd-logind[1473]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:44:34.065334 systemd[1]: Started sshd@17-10.0.0.148:22-10.0.0.1:58986.service - OpenSSH per-connection server daemon (10.0.0.1:58986). Jan 13 20:44:34.066503 systemd-logind[1473]: Removed session 17. Jan 13 20:44:34.101636 sshd[5847]: Accepted publickey for core from 10.0.0.1 port 58986 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:34.103799 sshd-session[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:34.108669 systemd-logind[1473]: New session 18 of user core. Jan 13 20:44:34.114142 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:44:34.559283 sshd[5849]: Connection closed by 10.0.0.1 port 58986 Jan 13 20:44:34.559635 sshd-session[5847]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:34.577039 systemd[1]: sshd@17-10.0.0.148:22-10.0.0.1:58986.service: Deactivated successfully. Jan 13 20:44:34.579792 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:44:34.581841 systemd-logind[1473]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:44:34.590700 systemd[1]: Started sshd@18-10.0.0.148:22-10.0.0.1:58998.service - OpenSSH per-connection server daemon (10.0.0.1:58998). Jan 13 20:44:34.591620 systemd-logind[1473]: Removed session 18. Jan 13 20:44:34.626104 sshd[5859]: Accepted publickey for core from 10.0.0.1 port 58998 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:34.627751 sshd-session[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:34.631862 systemd-logind[1473]: New session 19 of user core. Jan 13 20:44:34.645141 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:44:34.761099 sshd[5861]: Connection closed by 10.0.0.1 port 58998 Jan 13 20:44:34.761502 sshd-session[5859]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:34.765502 systemd[1]: sshd@18-10.0.0.148:22-10.0.0.1:58998.service: Deactivated successfully. Jan 13 20:44:34.767619 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:44:34.768299 systemd-logind[1473]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:44:34.769192 systemd-logind[1473]: Removed session 19. 
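Sessions 14 through 19 in this stretch all follow the same choreography: sshd accepts the publickey, pam_unix opens the session, systemd-logind starts session-N.scope, and the same steps unwind on disconnect. Pairing each "New session N" with its "Removed session N" entry yields per-session lifetimes. A sketch, assuming one journal entry per line as journalctl normally emits (this excerpt folds several entries onto each line) and supplying the year, which syslog-style timestamps omit:

    import re
    from datetime import datetime

    OPEN  = re.compile(r"^(\w+ +\d+ \d+:\d+:\d+).*New session (\d+) of user")
    CLOSE = re.compile(r"^(\w+ +\d+ \d+:\d+:\d+).*Removed session (\d+)\.")

    def session_durations(lines, year=2025):
        """Pair systemd-logind open/close entries; return seconds per session id."""
        fmt = "%Y %b %d %H:%M:%S"
        opened, durations = {}, {}
        for line in lines:
            if m := OPEN.match(line):
                opened[m.group(2)] = m.group(1)
            elif (m := CLOSE.match(line)) and m.group(2) in opened:
                t0 = datetime.strptime(f"{year} {opened.pop(m.group(2))}", fmt)
                t1 = datetime.strptime(f"{year} {m.group(1)}", fmt)
                durations[m.group(2)] = (t1 - t0).total_seconds()
        return durations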
Jan 13 20:44:35.200127 containerd[1485]: time="2025-01-13T20:44:35.200068782Z" level=info msg="StopPodSandbox for \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\"" Jan 13 20:44:35.200641 containerd[1485]: time="2025-01-13T20:44:35.200203086Z" level=info msg="TearDown network for sandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\" successfully" Jan 13 20:44:35.200641 containerd[1485]: time="2025-01-13T20:44:35.200213776Z" level=info msg="StopPodSandbox for \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\" returns successfully" Jan 13 20:44:35.200804 containerd[1485]: time="2025-01-13T20:44:35.200779096Z" level=info msg="RemovePodSandbox for \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\"" Jan 13 20:44:35.211421 containerd[1485]: time="2025-01-13T20:44:35.211364945Z" level=info msg="Forcibly stopping sandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\"" Jan 13 20:44:35.211606 containerd[1485]: time="2025-01-13T20:44:35.211547200Z" level=info msg="TearDown network for sandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\" successfully" Jan 13 20:44:35.251244 containerd[1485]: time="2025-01-13T20:44:35.251181260Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.251380 containerd[1485]: time="2025-01-13T20:44:35.251278935Z" level=info msg="RemovePodSandbox \"fca94a71dd9821b7f3a15aedfcd61f1515b3f94e0d263b5c0d6c76e59b4c189c\" returns successfully" Jan 13 20:44:35.251856 containerd[1485]: time="2025-01-13T20:44:35.251818786Z" level=info msg="StopPodSandbox for \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\"" Jan 13 20:44:35.252000 containerd[1485]: time="2025-01-13T20:44:35.251965453Z" level=info msg="TearDown network for sandbox \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\" successfully" Jan 13 20:44:35.252000 containerd[1485]: time="2025-01-13T20:44:35.251988979Z" level=info msg="StopPodSandbox for \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\" returns successfully" Jan 13 20:44:35.252413 containerd[1485]: time="2025-01-13T20:44:35.252386992Z" level=info msg="RemovePodSandbox for \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\"" Jan 13 20:44:35.252461 containerd[1485]: time="2025-01-13T20:44:35.252415876Z" level=info msg="Forcibly stopping sandbox \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\"" Jan 13 20:44:35.252621 containerd[1485]: time="2025-01-13T20:44:35.252495366Z" level=info msg="TearDown network for sandbox \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\" successfully" Jan 13 20:44:35.257445 containerd[1485]: time="2025-01-13T20:44:35.257401339Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.257516 containerd[1485]: time="2025-01-13T20:44:35.257494625Z" level=info msg="RemovePodSandbox \"8b86a1e428678366d575a7fdec564b6a22039f31308699ac585a9367365c6e98\" returns successfully" Jan 13 20:44:35.260198 containerd[1485]: time="2025-01-13T20:44:35.260161632Z" level=info msg="StopPodSandbox for \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\"" Jan 13 20:44:35.260351 containerd[1485]: time="2025-01-13T20:44:35.260280497Z" level=info msg="TearDown network for sandbox \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\" successfully" Jan 13 20:44:35.260351 containerd[1485]: time="2025-01-13T20:44:35.260295745Z" level=info msg="StopPodSandbox for \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\" returns successfully" Jan 13 20:44:35.260745 containerd[1485]: time="2025-01-13T20:44:35.260711672Z" level=info msg="RemovePodSandbox for \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\"" Jan 13 20:44:35.260795 containerd[1485]: time="2025-01-13T20:44:35.260748903Z" level=info msg="Forcibly stopping sandbox \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\"" Jan 13 20:44:35.260971 containerd[1485]: time="2025-01-13T20:44:35.260848140Z" level=info msg="TearDown network for sandbox \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\" successfully" Jan 13 20:44:35.266632 containerd[1485]: time="2025-01-13T20:44:35.266004187Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.266758 containerd[1485]: time="2025-01-13T20:44:35.266653114Z" level=info msg="RemovePodSandbox \"79612f079b0528366e78720de00a22ba16aa9a0a8a388fad8a6867c45c4873b1\" returns successfully" Jan 13 20:44:35.267311 containerd[1485]: time="2025-01-13T20:44:35.266975063Z" level=info msg="StopPodSandbox for \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\"" Jan 13 20:44:35.267311 containerd[1485]: time="2025-01-13T20:44:35.267083438Z" level=info msg="TearDown network for sandbox \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\" successfully" Jan 13 20:44:35.267311 containerd[1485]: time="2025-01-13T20:44:35.267095330Z" level=info msg="StopPodSandbox for \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\" returns successfully" Jan 13 20:44:35.267449 containerd[1485]: time="2025-01-13T20:44:35.267425045Z" level=info msg="RemovePodSandbox for \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\"" Jan 13 20:44:35.267481 containerd[1485]: time="2025-01-13T20:44:35.267458507Z" level=info msg="Forcibly stopping sandbox \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\"" Jan 13 20:44:35.267636 containerd[1485]: time="2025-01-13T20:44:35.267586510Z" level=info msg="TearDown network for sandbox \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\" successfully" Jan 13 20:44:35.271964 containerd[1485]: time="2025-01-13T20:44:35.271929347Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.272046 containerd[1485]: time="2025-01-13T20:44:35.271988339Z" level=info msg="RemovePodSandbox \"8ae0c57588775cf76e07a6d34ab555c1edda75937db11c955f3054ef926e83a3\" returns successfully" Jan 13 20:44:35.272303 containerd[1485]: time="2025-01-13T20:44:35.272256967Z" level=info msg="StopPodSandbox for \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\"" Jan 13 20:44:35.272433 containerd[1485]: time="2025-01-13T20:44:35.272366864Z" level=info msg="TearDown network for sandbox \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\" successfully" Jan 13 20:44:35.272433 containerd[1485]: time="2025-01-13T20:44:35.272381773Z" level=info msg="StopPodSandbox for \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\" returns successfully" Jan 13 20:44:35.272637 containerd[1485]: time="2025-01-13T20:44:35.272612420Z" level=info msg="RemovePodSandbox for \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\"" Jan 13 20:44:35.272687 containerd[1485]: time="2025-01-13T20:44:35.272639380Z" level=info msg="Forcibly stopping sandbox \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\"" Jan 13 20:44:35.272763 containerd[1485]: time="2025-01-13T20:44:35.272719622Z" level=info msg="TearDown network for sandbox \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\" successfully" Jan 13 20:44:35.276400 containerd[1485]: time="2025-01-13T20:44:35.276363898Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.276500 containerd[1485]: time="2025-01-13T20:44:35.276405115Z" level=info msg="RemovePodSandbox \"e76cbc9e9dd31f5308734315c0b18e0950bad1800b75935661c0dda360ec641a\" returns successfully" Jan 13 20:44:35.276678 containerd[1485]: time="2025-01-13T20:44:35.276645521Z" level=info msg="StopPodSandbox for \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\"" Jan 13 20:44:35.276778 containerd[1485]: time="2025-01-13T20:44:35.276742313Z" level=info msg="TearDown network for sandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\" successfully" Jan 13 20:44:35.276778 containerd[1485]: time="2025-01-13T20:44:35.276762161Z" level=info msg="StopPodSandbox for \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\" returns successfully" Jan 13 20:44:35.277198 containerd[1485]: time="2025-01-13T20:44:35.277161747Z" level=info msg="RemovePodSandbox for \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\"" Jan 13 20:44:35.277198 containerd[1485]: time="2025-01-13T20:44:35.277197986Z" level=info msg="Forcibly stopping sandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\"" Jan 13 20:44:35.277328 containerd[1485]: time="2025-01-13T20:44:35.277290671Z" level=info msg="TearDown network for sandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\" successfully" Jan 13 20:44:35.281054 containerd[1485]: time="2025-01-13T20:44:35.281026739Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.281128 containerd[1485]: time="2025-01-13T20:44:35.281074470Z" level=info msg="RemovePodSandbox \"3a6b754c2ec9649b0c2582369b8bde3655bb8cd9d6230cd33a565942b8e7dd5b\" returns successfully" Jan 13 20:44:35.281422 containerd[1485]: time="2025-01-13T20:44:35.281397271Z" level=info msg="StopPodSandbox for \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\"" Jan 13 20:44:35.281503 containerd[1485]: time="2025-01-13T20:44:35.281483524Z" level=info msg="TearDown network for sandbox \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\" successfully" Jan 13 20:44:35.281503 containerd[1485]: time="2025-01-13T20:44:35.281500447Z" level=info msg="StopPodSandbox for \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\" returns successfully" Jan 13 20:44:35.283051 containerd[1485]: time="2025-01-13T20:44:35.281807748Z" level=info msg="RemovePodSandbox for \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\"" Jan 13 20:44:35.283051 containerd[1485]: time="2025-01-13T20:44:35.281844688Z" level=info msg="Forcibly stopping sandbox \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\"" Jan 13 20:44:35.283051 containerd[1485]: time="2025-01-13T20:44:35.281936762Z" level=info msg="TearDown network for sandbox \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\" successfully" Jan 13 20:44:35.285916 containerd[1485]: time="2025-01-13T20:44:35.285884251Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.285996 containerd[1485]: time="2025-01-13T20:44:35.285927993Z" level=info msg="RemovePodSandbox \"a067c68ad121c3801dd103213443f6b6b7f4a1ccf09284b29ee7aa298c3d28c0\" returns successfully" Jan 13 20:44:35.286301 containerd[1485]: time="2025-01-13T20:44:35.286270952Z" level=info msg="StopPodSandbox for \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\"" Jan 13 20:44:35.286416 containerd[1485]: time="2025-01-13T20:44:35.286378726Z" level=info msg="TearDown network for sandbox \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\" successfully" Jan 13 20:44:35.286416 containerd[1485]: time="2025-01-13T20:44:35.286390178Z" level=info msg="StopPodSandbox for \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\" returns successfully" Jan 13 20:44:35.286709 containerd[1485]: time="2025-01-13T20:44:35.286682521Z" level=info msg="RemovePodSandbox for \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\"" Jan 13 20:44:35.286763 containerd[1485]: time="2025-01-13T20:44:35.286708280Z" level=info msg="Forcibly stopping sandbox \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\"" Jan 13 20:44:35.286832 containerd[1485]: time="2025-01-13T20:44:35.286789143Z" level=info msg="TearDown network for sandbox \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\" successfully" Jan 13 20:44:35.290306 containerd[1485]: time="2025-01-13T20:44:35.290281040Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.290412 containerd[1485]: time="2025-01-13T20:44:35.290311808Z" level=info msg="RemovePodSandbox \"7b691981f4524e429e62b87851bd03a3f6dfb9ee0fa01d5db6cbe0da93f5fac8\" returns successfully" Jan 13 20:44:35.290617 containerd[1485]: time="2025-01-13T20:44:35.290590956Z" level=info msg="StopPodSandbox for \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\"" Jan 13 20:44:35.290720 containerd[1485]: time="2025-01-13T20:44:35.290695554Z" level=info msg="TearDown network for sandbox \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\" successfully" Jan 13 20:44:35.290720 containerd[1485]: time="2025-01-13T20:44:35.290712326Z" level=info msg="StopPodSandbox for \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\" returns successfully" Jan 13 20:44:35.291038 containerd[1485]: time="2025-01-13T20:44:35.290991564Z" level=info msg="RemovePodSandbox for \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\"" Jan 13 20:44:35.291124 containerd[1485]: time="2025-01-13T20:44:35.291082136Z" level=info msg="Forcibly stopping sandbox \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\"" Jan 13 20:44:35.291238 containerd[1485]: time="2025-01-13T20:44:35.291198346Z" level=info msg="TearDown network for sandbox \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\" successfully" Jan 13 20:44:35.294981 containerd[1485]: time="2025-01-13T20:44:35.294946758Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.295069 containerd[1485]: time="2025-01-13T20:44:35.294990720Z" level=info msg="RemovePodSandbox \"028a5f7ffc969c4719ce9fdec80139aabcce0639c4c0f45843e74dc6d5059be0\" returns successfully" Jan 13 20:44:35.295371 containerd[1485]: time="2025-01-13T20:44:35.295330654Z" level=info msg="StopPodSandbox for \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\"" Jan 13 20:44:35.295459 containerd[1485]: time="2025-01-13T20:44:35.295421887Z" level=info msg="TearDown network for sandbox \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\" successfully" Jan 13 20:44:35.295459 containerd[1485]: time="2025-01-13T20:44:35.295442235Z" level=info msg="StopPodSandbox for \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\" returns successfully" Jan 13 20:44:35.295760 containerd[1485]: time="2025-01-13T20:44:35.295725301Z" level=info msg="RemovePodSandbox for \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\"" Jan 13 20:44:35.295760 containerd[1485]: time="2025-01-13T20:44:35.295758945Z" level=info msg="Forcibly stopping sandbox \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\"" Jan 13 20:44:35.295952 containerd[1485]: time="2025-01-13T20:44:35.295903508Z" level=info msg="TearDown network for sandbox \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\" successfully" Jan 13 20:44:35.299828 containerd[1485]: time="2025-01-13T20:44:35.299793619Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.299887 containerd[1485]: time="2025-01-13T20:44:35.299846950Z" level=info msg="RemovePodSandbox \"a4c16220632a2f833857041ef6bc9e795e06307ebfb34c5eff10cbd29d7c6872\" returns successfully" Jan 13 20:44:35.300233 containerd[1485]: time="2025-01-13T20:44:35.300198825Z" level=info msg="StopPodSandbox for \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\"" Jan 13 20:44:35.300315 containerd[1485]: time="2025-01-13T20:44:35.300303382Z" level=info msg="TearDown network for sandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\" successfully" Jan 13 20:44:35.300431 containerd[1485]: time="2025-01-13T20:44:35.300318562Z" level=info msg="StopPodSandbox for \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\" returns successfully" Jan 13 20:44:35.300614 containerd[1485]: time="2025-01-13T20:44:35.300586328Z" level=info msg="RemovePodSandbox for \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\"" Jan 13 20:44:35.300614 containerd[1485]: time="2025-01-13T20:44:35.300610324Z" level=info msg="Forcibly stopping sandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\"" Jan 13 20:44:35.300717 containerd[1485]: time="2025-01-13T20:44:35.300683663Z" level=info msg="TearDown network for sandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\" successfully" Jan 13 20:44:35.304408 containerd[1485]: time="2025-01-13T20:44:35.304363956Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.304460 containerd[1485]: time="2025-01-13T20:44:35.304413330Z" level=info msg="RemovePodSandbox \"d4b2ae79a6bf6836c81c71823cb5f39395d6545a0971b047e0026b182afc43f5\" returns successfully" Jan 13 20:44:35.304760 containerd[1485]: time="2025-01-13T20:44:35.304718577Z" level=info msg="StopPodSandbox for \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\"" Jan 13 20:44:35.304910 containerd[1485]: time="2025-01-13T20:44:35.304819658Z" level=info msg="TearDown network for sandbox \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\" successfully" Jan 13 20:44:35.304910 containerd[1485]: time="2025-01-13T20:44:35.304834416Z" level=info msg="StopPodSandbox for \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\" returns successfully" Jan 13 20:44:35.305184 containerd[1485]: time="2025-01-13T20:44:35.305158288Z" level=info msg="RemovePodSandbox for \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\"" Jan 13 20:44:35.305230 containerd[1485]: time="2025-01-13T20:44:35.305184599Z" level=info msg="Forcibly stopping sandbox \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\"" Jan 13 20:44:35.305295 containerd[1485]: time="2025-01-13T20:44:35.305259169Z" level=info msg="TearDown network for sandbox \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\" successfully" Jan 13 20:44:35.308778 containerd[1485]: time="2025-01-13T20:44:35.308742270Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.308778 containerd[1485]: time="2025-01-13T20:44:35.308779610Z" level=info msg="RemovePodSandbox \"1cee129c088af3c06c0091d80e14aabff3596dab54cd0756ec2ef645236a902e\" returns successfully" Jan 13 20:44:35.309125 containerd[1485]: time="2025-01-13T20:44:35.309080349Z" level=info msg="StopPodSandbox for \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\"" Jan 13 20:44:35.309216 containerd[1485]: time="2025-01-13T20:44:35.309198474Z" level=info msg="TearDown network for sandbox \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\" successfully" Jan 13 20:44:35.309216 containerd[1485]: time="2025-01-13T20:44:35.309212810Z" level=info msg="StopPodSandbox for \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\" returns successfully" Jan 13 20:44:35.309510 containerd[1485]: time="2025-01-13T20:44:35.309468834Z" level=info msg="RemovePodSandbox for \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\"" Jan 13 20:44:35.309510 containerd[1485]: time="2025-01-13T20:44:35.309499453Z" level=info msg="Forcibly stopping sandbox \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\"" Jan 13 20:44:35.309626 containerd[1485]: time="2025-01-13T20:44:35.309580937Z" level=info msg="TearDown network for sandbox \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\" successfully" Jan 13 20:44:35.313681 containerd[1485]: time="2025-01-13T20:44:35.313645988Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.313723 containerd[1485]: time="2025-01-13T20:44:35.313698016Z" level=info msg="RemovePodSandbox \"242440afe550347fef10b1a5c8e28779be543825762b7267a2f0fcf2d59a183f\" returns successfully" Jan 13 20:44:35.314003 containerd[1485]: time="2025-01-13T20:44:35.313968879Z" level=info msg="StopPodSandbox for \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\"" Jan 13 20:44:35.314122 containerd[1485]: time="2025-01-13T20:44:35.314080560Z" level=info msg="TearDown network for sandbox \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\" successfully" Jan 13 20:44:35.314122 containerd[1485]: time="2025-01-13T20:44:35.314096290Z" level=info msg="StopPodSandbox for \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\" returns successfully" Jan 13 20:44:35.314363 containerd[1485]: time="2025-01-13T20:44:35.314323720Z" level=info msg="RemovePodSandbox for \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\"" Jan 13 20:44:35.314363 containerd[1485]: time="2025-01-13T20:44:35.314357004Z" level=info msg="Forcibly stopping sandbox \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\"" Jan 13 20:44:35.314460 containerd[1485]: time="2025-01-13T20:44:35.314420514Z" level=info msg="TearDown network for sandbox \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\" successfully" Jan 13 20:44:35.318469 containerd[1485]: time="2025-01-13T20:44:35.318436492Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.318500 containerd[1485]: time="2025-01-13T20:44:35.318489161Z" level=info msg="RemovePodSandbox \"96ff5e224fda7c2c9f584dfe0596212b3cb96520784053cfe1b2c8cccfd261c9\" returns successfully" Jan 13 20:44:35.318825 containerd[1485]: time="2025-01-13T20:44:35.318788989Z" level=info msg="StopPodSandbox for \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\"" Jan 13 20:44:35.318901 containerd[1485]: time="2025-01-13T20:44:35.318864803Z" level=info msg="TearDown network for sandbox \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\" successfully" Jan 13 20:44:35.318901 containerd[1485]: time="2025-01-13T20:44:35.318881504Z" level=info msg="StopPodSandbox for \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\" returns successfully" Jan 13 20:44:35.319160 containerd[1485]: time="2025-01-13T20:44:35.319132949Z" level=info msg="RemovePodSandbox for \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\"" Jan 13 20:44:35.319160 containerd[1485]: time="2025-01-13T20:44:35.319158448Z" level=info msg="Forcibly stopping sandbox \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\"" Jan 13 20:44:35.319307 containerd[1485]: time="2025-01-13T20:44:35.319267144Z" level=info msg="TearDown network for sandbox \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\" successfully" Jan 13 20:44:35.322965 containerd[1485]: time="2025-01-13T20:44:35.322933480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.323035 containerd[1485]: time="2025-01-13T20:44:35.322981261Z" level=info msg="RemovePodSandbox \"1b14fbca89ec15e87cb76e5d1f906b097f607dfe40cd5845cbd8dfa7dd58a0a9\" returns successfully" Jan 13 20:44:35.323326 containerd[1485]: time="2025-01-13T20:44:35.323299704Z" level=info msg="StopPodSandbox for \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\"" Jan 13 20:44:35.323405 containerd[1485]: time="2025-01-13T20:44:35.323393541Z" level=info msg="TearDown network for sandbox \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\" successfully" Jan 13 20:44:35.323466 containerd[1485]: time="2025-01-13T20:44:35.323403531Z" level=info msg="StopPodSandbox for \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\" returns successfully" Jan 13 20:44:35.323638 containerd[1485]: time="2025-01-13T20:44:35.323616703Z" level=info msg="RemovePodSandbox for \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\"" Jan 13 20:44:35.323678 containerd[1485]: time="2025-01-13T20:44:35.323640659Z" level=info msg="Forcibly stopping sandbox \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\"" Jan 13 20:44:35.323740 containerd[1485]: time="2025-01-13T20:44:35.323711172Z" level=info msg="TearDown network for sandbox \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\" successfully" Jan 13 20:44:35.327267 containerd[1485]: time="2025-01-13T20:44:35.327235570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.327319 containerd[1485]: time="2025-01-13T20:44:35.327283541Z" level=info msg="RemovePodSandbox \"814c4037f948a0c08c59c5dd9606658fe5c5be763e63f3b8afa256dfdc73527b\" returns successfully" Jan 13 20:44:35.327554 containerd[1485]: time="2025-01-13T20:44:35.327526621Z" level=info msg="StopPodSandbox for \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\"" Jan 13 20:44:35.327679 containerd[1485]: time="2025-01-13T20:44:35.327623294Z" level=info msg="TearDown network for sandbox \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\" successfully" Jan 13 20:44:35.327679 containerd[1485]: time="2025-01-13T20:44:35.327670895Z" level=info msg="StopPodSandbox for \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\" returns successfully" Jan 13 20:44:35.327947 containerd[1485]: time="2025-01-13T20:44:35.327921438Z" level=info msg="RemovePodSandbox for \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\"" Jan 13 20:44:35.327947 containerd[1485]: time="2025-01-13T20:44:35.327940905Z" level=info msg="Forcibly stopping sandbox \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\"" Jan 13 20:44:35.328057 containerd[1485]: time="2025-01-13T20:44:35.328009174Z" level=info msg="TearDown network for sandbox \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\" successfully" Jan 13 20:44:35.333042 containerd[1485]: time="2025-01-13T20:44:35.332817172Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.333042 containerd[1485]: time="2025-01-13T20:44:35.332919835Z" level=info msg="RemovePodSandbox \"e35e7309fdb99a57a14efd416c823fe916284fd70e17f5e72b97139b7f939366\" returns successfully" Jan 13 20:44:35.333360 containerd[1485]: time="2025-01-13T20:44:35.333327927Z" level=info msg="StopPodSandbox for \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\"" Jan 13 20:44:35.333472 containerd[1485]: time="2025-01-13T20:44:35.333444678Z" level=info msg="TearDown network for sandbox \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\" successfully" Jan 13 20:44:35.333500 containerd[1485]: time="2025-01-13T20:44:35.333476168Z" level=info msg="StopPodSandbox for \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\" returns successfully" Jan 13 20:44:35.334117 containerd[1485]: time="2025-01-13T20:44:35.334048622Z" level=info msg="RemovePodSandbox for \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\"" Jan 13 20:44:35.334117 containerd[1485]: time="2025-01-13T20:44:35.334071334Z" level=info msg="Forcibly stopping sandbox \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\"" Jan 13 20:44:35.334490 containerd[1485]: time="2025-01-13T20:44:35.334174991Z" level=info msg="TearDown network for sandbox \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\" successfully" Jan 13 20:44:35.337897 containerd[1485]: time="2025-01-13T20:44:35.337854633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.337897 containerd[1485]: time="2025-01-13T20:44:35.337895069Z" level=info msg="RemovePodSandbox \"fa6a135f7134e07ca8ffcb48c6d64bd7ba95356efe29bf6763af120f980c3d44\" returns successfully" Jan 13 20:44:35.338225 containerd[1485]: time="2025-01-13T20:44:35.338169318Z" level=info msg="StopPodSandbox for \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\"" Jan 13 20:44:35.338381 containerd[1485]: time="2025-01-13T20:44:35.338256844Z" level=info msg="TearDown network for sandbox \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\" successfully" Jan 13 20:44:35.338381 containerd[1485]: time="2025-01-13T20:44:35.338292701Z" level=info msg="StopPodSandbox for \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\" returns successfully" Jan 13 20:44:35.338537 containerd[1485]: time="2025-01-13T20:44:35.338506015Z" level=info msg="RemovePodSandbox for \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\"" Jan 13 20:44:35.338537 containerd[1485]: time="2025-01-13T20:44:35.338533547Z" level=info msg="Forcibly stopping sandbox \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\"" Jan 13 20:44:35.338667 containerd[1485]: time="2025-01-13T20:44:35.338620091Z" level=info msg="TearDown network for sandbox \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\" successfully" Jan 13 20:44:35.342395 containerd[1485]: time="2025-01-13T20:44:35.342366159Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.342446 containerd[1485]: time="2025-01-13T20:44:35.342410122Z" level=info msg="RemovePodSandbox \"c0d213634f50a5a3429de47066f03afd7cc939de434527481d78c1ea14f42b8f\" returns successfully" Jan 13 20:44:35.342661 containerd[1485]: time="2025-01-13T20:44:35.342633836Z" level=info msg="StopPodSandbox for \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\"" Jan 13 20:44:35.342745 containerd[1485]: time="2025-01-13T20:44:35.342714047Z" level=info msg="TearDown network for sandbox \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\" successfully" Jan 13 20:44:35.342745 containerd[1485]: time="2025-01-13T20:44:35.342741218Z" level=info msg="StopPodSandbox for \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\" returns successfully" Jan 13 20:44:35.343633 containerd[1485]: time="2025-01-13T20:44:35.343047819Z" level=info msg="RemovePodSandbox for \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\"" Jan 13 20:44:35.343633 containerd[1485]: time="2025-01-13T20:44:35.343071834Z" level=info msg="Forcibly stopping sandbox \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\"" Jan 13 20:44:35.343633 containerd[1485]: time="2025-01-13T20:44:35.343144281Z" level=info msg="TearDown network for sandbox \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\" successfully" Jan 13 20:44:35.346629 containerd[1485]: time="2025-01-13T20:44:35.346595180Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.346667 containerd[1485]: time="2025-01-13T20:44:35.346632120Z" level=info msg="RemovePodSandbox \"7d794e4e4f4c28adeb885951683e018cb45571ba7106015b5ea3e34acbc876d3\" returns successfully" Jan 13 20:44:35.347050 containerd[1485]: time="2025-01-13T20:44:35.346903735Z" level=info msg="StopPodSandbox for \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\"" Jan 13 20:44:35.347050 containerd[1485]: time="2025-01-13T20:44:35.346983335Z" level=info msg="TearDown network for sandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\" successfully" Jan 13 20:44:35.347050 containerd[1485]: time="2025-01-13T20:44:35.346993424Z" level=info msg="StopPodSandbox for \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\" returns successfully" Jan 13 20:44:35.347384 containerd[1485]: time="2025-01-13T20:44:35.347358144Z" level=info msg="RemovePodSandbox for \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\"" Jan 13 20:44:35.347384 containerd[1485]: time="2025-01-13T20:44:35.347382370Z" level=info msg="Forcibly stopping sandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\"" Jan 13 20:44:35.347479 containerd[1485]: time="2025-01-13T20:44:35.347447633Z" level=info msg="TearDown network for sandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\" successfully" Jan 13 20:44:35.351205 containerd[1485]: time="2025-01-13T20:44:35.351164415Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.351205 containerd[1485]: time="2025-01-13T20:44:35.351202398Z" level=info msg="RemovePodSandbox \"425dab6ccae20e7d1a40b83f28db573eae0c06ac4ff6ba3e8f6a39aa97dfa6bf\" returns successfully" Jan 13 20:44:35.351440 containerd[1485]: time="2025-01-13T20:44:35.351413497Z" level=info msg="StopPodSandbox for \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\"" Jan 13 20:44:35.351522 containerd[1485]: time="2025-01-13T20:44:35.351500732Z" level=info msg="TearDown network for sandbox \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\" successfully" Jan 13 20:44:35.351522 containerd[1485]: time="2025-01-13T20:44:35.351515149Z" level=info msg="StopPodSandbox for \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\" returns successfully" Jan 13 20:44:35.351722 containerd[1485]: time="2025-01-13T20:44:35.351701623Z" level=info msg="RemovePodSandbox for \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\"" Jan 13 20:44:35.351722 containerd[1485]: time="2025-01-13T20:44:35.351719696Z" level=info msg="Forcibly stopping sandbox \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\"" Jan 13 20:44:35.351801 containerd[1485]: time="2025-01-13T20:44:35.351775031Z" level=info msg="TearDown network for sandbox \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\" successfully" Jan 13 20:44:35.355216 containerd[1485]: time="2025-01-13T20:44:35.355187958Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.355277 containerd[1485]: time="2025-01-13T20:44:35.355219809Z" level=info msg="RemovePodSandbox \"a7d8f3c3d2fd5a6911cf6572718dc7c8a6b7c1c55df765dacedd40fc416d98d4\" returns successfully" Jan 13 20:44:35.355495 containerd[1485]: time="2025-01-13T20:44:35.355461596Z" level=info msg="StopPodSandbox for \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\"" Jan 13 20:44:35.355566 containerd[1485]: time="2025-01-13T20:44:35.355547378Z" level=info msg="TearDown network for sandbox \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\" successfully" Jan 13 20:44:35.355566 containerd[1485]: time="2025-01-13T20:44:35.355561516Z" level=info msg="StopPodSandbox for \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\" returns successfully" Jan 13 20:44:35.355804 containerd[1485]: time="2025-01-13T20:44:35.355777343Z" level=info msg="RemovePodSandbox for \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\"" Jan 13 20:44:35.355804 containerd[1485]: time="2025-01-13T20:44:35.355799776Z" level=info msg="Forcibly stopping sandbox \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\"" Jan 13 20:44:35.355899 containerd[1485]: time="2025-01-13T20:44:35.355869178Z" level=info msg="TearDown network for sandbox \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\" successfully" Jan 13 20:44:35.359382 containerd[1485]: time="2025-01-13T20:44:35.359350234Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.359436 containerd[1485]: time="2025-01-13T20:44:35.359386372Z" level=info msg="RemovePodSandbox \"927c6708a8fcbdf6196f99fafe494806f2236694aecb3e4e53bea669488457d7\" returns successfully" Jan 13 20:44:35.359608 containerd[1485]: time="2025-01-13T20:44:35.359586732Z" level=info msg="StopPodSandbox for \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\"" Jan 13 20:44:35.359687 containerd[1485]: time="2025-01-13T20:44:35.359667384Z" level=info msg="TearDown network for sandbox \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\" successfully" Jan 13 20:44:35.359687 containerd[1485]: time="2025-01-13T20:44:35.359680990Z" level=info msg="StopPodSandbox for \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\" returns successfully" Jan 13 20:44:35.360848 containerd[1485]: time="2025-01-13T20:44:35.359909893Z" level=info msg="RemovePodSandbox for \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\"" Jan 13 20:44:35.360848 containerd[1485]: time="2025-01-13T20:44:35.359932846Z" level=info msg="Forcibly stopping sandbox \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\"" Jan 13 20:44:35.360848 containerd[1485]: time="2025-01-13T20:44:35.360004342Z" level=info msg="TearDown network for sandbox \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\" successfully" Jan 13 20:44:35.363345 containerd[1485]: time="2025-01-13T20:44:35.363308643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.363408 containerd[1485]: time="2025-01-13T20:44:35.363348239Z" level=info msg="RemovePodSandbox \"1d6bb426eb15641d3bb6de30ed8eb16029b3508aa93d76976a18139b26d58c81\" returns successfully" Jan 13 20:44:35.363564 containerd[1485]: time="2025-01-13T20:44:35.363540392Z" level=info msg="StopPodSandbox for \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\"" Jan 13 20:44:35.363640 containerd[1485]: time="2025-01-13T20:44:35.363623279Z" level=info msg="TearDown network for sandbox \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\" successfully" Jan 13 20:44:35.363640 containerd[1485]: time="2025-01-13T20:44:35.363636183Z" level=info msg="StopPodSandbox for \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\" returns successfully" Jan 13 20:44:35.364794 containerd[1485]: time="2025-01-13T20:44:35.363862772Z" level=info msg="RemovePodSandbox for \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\"" Jan 13 20:44:35.364794 containerd[1485]: time="2025-01-13T20:44:35.363886467Z" level=info msg="Forcibly stopping sandbox \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\"" Jan 13 20:44:35.364794 containerd[1485]: time="2025-01-13T20:44:35.363951319Z" level=info msg="TearDown network for sandbox \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\" successfully" Jan 13 20:44:35.367387 containerd[1485]: time="2025-01-13T20:44:35.367364608Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.367508 containerd[1485]: time="2025-01-13T20:44:35.367403631Z" level=info msg="RemovePodSandbox \"7d07321a2e8bef952e833ba77d89a50fc532694dd37d6a25bbffdbdac025bb25\" returns successfully" Jan 13 20:44:35.367662 containerd[1485]: time="2025-01-13T20:44:35.367640008Z" level=info msg="StopPodSandbox for \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\"" Jan 13 20:44:35.367738 containerd[1485]: time="2025-01-13T20:44:35.367720251Z" level=info msg="TearDown network for sandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\" successfully" Jan 13 20:44:35.367738 containerd[1485]: time="2025-01-13T20:44:35.367734257Z" level=info msg="StopPodSandbox for \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\" returns successfully" Jan 13 20:44:35.368038 containerd[1485]: time="2025-01-13T20:44:35.367975914Z" level=info msg="RemovePodSandbox for \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\"" Jan 13 20:44:35.368038 containerd[1485]: time="2025-01-13T20:44:35.368003166Z" level=info msg="Forcibly stopping sandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\"" Jan 13 20:44:35.368100 containerd[1485]: time="2025-01-13T20:44:35.368075162Z" level=info msg="TearDown network for sandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\" successfully" Jan 13 20:44:35.371410 containerd[1485]: time="2025-01-13T20:44:35.371389994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.371461 containerd[1485]: time="2025-01-13T20:44:35.371421574Z" level=info msg="RemovePodSandbox \"ca5fa880c8472fbb4ce5da8448cab6999124831227ce447ca98c8fe1c32f45d2\" returns successfully" Jan 13 20:44:35.371635 containerd[1485]: time="2025-01-13T20:44:35.371615490Z" level=info msg="StopPodSandbox for \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\"" Jan 13 20:44:35.371708 containerd[1485]: time="2025-01-13T20:44:35.371695763Z" level=info msg="TearDown network for sandbox \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\" successfully" Jan 13 20:44:35.371733 containerd[1485]: time="2025-01-13T20:44:35.371707445Z" level=info msg="StopPodSandbox for \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\" returns successfully" Jan 13 20:44:35.372656 containerd[1485]: time="2025-01-13T20:44:35.371891864Z" level=info msg="RemovePodSandbox for \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\"" Jan 13 20:44:35.372656 containerd[1485]: time="2025-01-13T20:44:35.371924315Z" level=info msg="Forcibly stopping sandbox \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\"" Jan 13 20:44:35.372656 containerd[1485]: time="2025-01-13T20:44:35.371993787Z" level=info msg="TearDown network for sandbox \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\" successfully" Jan 13 20:44:35.375369 containerd[1485]: time="2025-01-13T20:44:35.375341140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.375432 containerd[1485]: time="2025-01-13T20:44:35.375385324Z" level=info msg="RemovePodSandbox \"eebb0fce572a9eab34c5f1ceba88949fb88b0aecab74892d98a6fe82b94e1983\" returns successfully" Jan 13 20:44:35.375635 containerd[1485]: time="2025-01-13T20:44:35.375606242Z" level=info msg="StopPodSandbox for \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\"" Jan 13 20:44:35.375716 containerd[1485]: time="2025-01-13T20:44:35.375700570Z" level=info msg="TearDown network for sandbox \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\" successfully" Jan 13 20:44:35.375716 containerd[1485]: time="2025-01-13T20:44:35.375713274Z" level=info msg="StopPodSandbox for \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\" returns successfully" Jan 13 20:44:35.375968 containerd[1485]: time="2025-01-13T20:44:35.375941666Z" level=info msg="RemovePodSandbox for \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\"" Jan 13 20:44:35.375968 containerd[1485]: time="2025-01-13T20:44:35.375966153Z" level=info msg="Forcibly stopping sandbox \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\"" Jan 13 20:44:35.376085 containerd[1485]: time="2025-01-13T20:44:35.376054780Z" level=info msg="TearDown network for sandbox \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\" successfully" Jan 13 20:44:35.379256 containerd[1485]: time="2025-01-13T20:44:35.379235929Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.379323 containerd[1485]: time="2025-01-13T20:44:35.379266186Z" level=info msg="RemovePodSandbox \"446f3b33acafd0b481ab5a1e7a25d2eb9b7ab4660c872f782d139e7191130cfd\" returns successfully" Jan 13 20:44:35.379531 containerd[1485]: time="2025-01-13T20:44:35.379490130Z" level=info msg="StopPodSandbox for \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\"" Jan 13 20:44:35.379618 containerd[1485]: time="2025-01-13T20:44:35.379574038Z" level=info msg="TearDown network for sandbox \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\" successfully" Jan 13 20:44:35.379618 containerd[1485]: time="2025-01-13T20:44:35.379589729Z" level=info msg="StopPodSandbox for \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\" returns successfully" Jan 13 20:44:35.379791 containerd[1485]: time="2025-01-13T20:44:35.379766874Z" level=info msg="RemovePodSandbox for \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\"" Jan 13 20:44:35.379791 containerd[1485]: time="2025-01-13T20:44:35.379789927Z" level=info msg="Forcibly stopping sandbox \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\"" Jan 13 20:44:35.379881 containerd[1485]: time="2025-01-13T20:44:35.379852836Z" level=info msg="TearDown network for sandbox \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\" successfully" Jan 13 20:44:35.383206 containerd[1485]: time="2025-01-13T20:44:35.383178569Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:44:35.383264 containerd[1485]: time="2025-01-13T20:44:35.383212533Z" level=info msg="RemovePodSandbox \"fcae0926afbd25d9917ec37083f97b582b911060232dcf8a74d0035e62fef0b0\" returns successfully" Jan 13 20:44:35.383438 containerd[1485]: time="2025-01-13T20:44:35.383411730Z" level=info msg="StopPodSandbox for \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\"" Jan 13 20:44:35.383506 containerd[1485]: time="2025-01-13T20:44:35.383487583Z" level=info msg="TearDown network for sandbox \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\" successfully" Jan 13 20:44:35.383506 containerd[1485]: time="2025-01-13T20:44:35.383500729Z" level=info msg="StopPodSandbox for \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\" returns successfully" Jan 13 20:44:35.383734 containerd[1485]: time="2025-01-13T20:44:35.383709834Z" level=info msg="RemovePodSandbox for \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\"" Jan 13 20:44:35.383734 containerd[1485]: time="2025-01-13T20:44:35.383729903Z" level=info msg="Forcibly stopping sandbox \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\"" Jan 13 20:44:35.383828 containerd[1485]: time="2025-01-13T20:44:35.383798131Z" level=info msg="TearDown network for sandbox \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\" successfully" Jan 13 20:44:35.387374 containerd[1485]: time="2025-01-13T20:44:35.387338690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:44:35.387422 containerd[1485]: time="2025-01-13T20:44:35.387377283Z" level=info msg="RemovePodSandbox \"4665def6fef99c3a0986dce09dfa0fd10d32cb9081ecf57f29c76ab665ac6e33\" returns successfully" Jan 13 20:44:39.446948 kubelet[2663]: I0113 20:44:39.446891 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:44:39.774840 systemd[1]: Started sshd@19-10.0.0.148:22-10.0.0.1:59012.service - OpenSSH per-connection server daemon (10.0.0.1:59012). Jan 13 20:44:39.815598 sshd[5907]: Accepted publickey for core from 10.0.0.1 port 59012 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:39.817110 sshd-session[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:39.821047 systemd-logind[1473]: New session 20 of user core. Jan 13 20:44:39.831139 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:44:39.935524 sshd[5909]: Connection closed by 10.0.0.1 port 59012 Jan 13 20:44:39.935882 sshd-session[5907]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:39.939490 systemd[1]: sshd@19-10.0.0.148:22-10.0.0.1:59012.service: Deactivated successfully. Jan 13 20:44:39.941571 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:44:39.942152 systemd-logind[1473]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:44:39.943035 systemd-logind[1473]: Removed session 20. Jan 13 20:44:44.952729 systemd[1]: Started sshd@20-10.0.0.148:22-10.0.0.1:39542.service - OpenSSH per-connection server daemon (10.0.0.1:39542). Jan 13 20:44:44.993037 sshd[5924]: Accepted publickey for core from 10.0.0.1 port 39542 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:44.994481 sshd-session[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:44.998663 systemd-logind[1473]: New session 21 of user core. Jan 13 20:44:45.008140 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:44:45.122070 sshd[5926]: Connection closed by 10.0.0.1 port 39542 Jan 13 20:44:45.122651 sshd-session[5924]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:45.128235 systemd[1]: sshd@20-10.0.0.148:22-10.0.0.1:39542.service: Deactivated successfully. Jan 13 20:44:45.130464 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:44:45.131258 systemd-logind[1473]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:44:45.132149 systemd-logind[1473]: Removed session 21. Jan 13 20:44:49.215607 kubelet[2663]: E0113 20:44:49.215565 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:50.135394 systemd[1]: Started sshd@21-10.0.0.148:22-10.0.0.1:39552.service - OpenSSH per-connection server daemon (10.0.0.1:39552). Jan 13 20:44:50.192915 sshd[5941]: Accepted publickey for core from 10.0.0.1 port 39552 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:50.194824 sshd-session[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:50.199095 systemd-logind[1473]: New session 22 of user core. Jan 13 20:44:50.206227 systemd[1]: Started session-22.scope - Session 22 of User core. 
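The burst of StopPodSandbox / TearDown / RemovePodSandbox triples around 20:44:35, which finishes at the top of this block, is consistent with routine garbage collection of exited pod sandboxes: each sandbox is stopped and torn down, then forcibly stopped and removed, and the "Failed to get podSandbox status ... not found" warning fires during the forcible pass because the sandbox is already gone; every removal still returns successfully. A sketch that tallies the sweep from the journal text and checks that each warned-about sandbox was indeed removed (the regexes tolerate the escaped quotes containerd writes inside msg="..."):

    import re

    REMOVED = re.compile(r'RemovePodSandbox \\?"([0-9a-f]{64})\\?" returns successfully')
    WARNED  = re.compile(r'Failed to get podSandbox status .* sandboxID \\?"([0-9a-f]{64})\\?"')

    def gc_summary(lines):
        """Count removed sandboxes and 'not found' warnings in a journal excerpt."""
        removed, warned = set(), set()
        for line in lines:
            for m in REMOVED.finditer(line):
                removed.add(m.group(1))
            for m in WARNED.finditer(line):
                warned.add(m.group(1))
        # For the sweep above these sets should coincide: every warning was
        # followed by a successful removal of the same sandbox id.
        return len(removed), len(warned), warned <= removed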
Jan 13 20:44:50.341221 sshd[5943]: Connection closed by 10.0.0.1 port 39552 Jan 13 20:44:50.341641 sshd-session[5941]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:50.346348 systemd[1]: sshd@21-10.0.0.148:22-10.0.0.1:39552.service: Deactivated successfully. Jan 13 20:44:50.349785 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:44:50.350734 systemd-logind[1473]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:44:50.352265 systemd-logind[1473]: Removed session 22. Jan 13 20:44:55.357230 systemd[1]: Started sshd@22-10.0.0.148:22-10.0.0.1:58210.service - OpenSSH per-connection server daemon (10.0.0.1:58210). Jan 13 20:44:55.397998 sshd[5957]: Accepted publickey for core from 10.0.0.1 port 58210 ssh2: RSA SHA256:6qkPuoLJ5YUfKJKPOJceaaQygSTwShKr6otktL0ZvJ8 Jan 13 20:44:55.399551 sshd-session[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:55.403526 systemd-logind[1473]: New session 23 of user core. Jan 13 20:44:55.414148 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:44:55.523436 sshd[5959]: Connection closed by 10.0.0.1 port 58210 Jan 13 20:44:55.523784 sshd-session[5957]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:55.527460 systemd[1]: sshd@22-10.0.0.148:22-10.0.0.1:58210.service: Deactivated successfully. Jan 13 20:44:55.529979 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:44:55.530708 systemd-logind[1473]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:44:55.531553 systemd-logind[1473]: Removed session 23. Jan 13 20:44:55.995080 kubelet[2663]: E0113 20:44:55.995054 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:44:56.215273 kubelet[2663]: E0113 20:44:56.215209 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
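The recurring "Nameserver limits exceeded" errors from kubelet mean the node's resolv.conf lists more nameservers than the resolver's classic limit of three, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 here) are applied and the rest are dropped. A sketch of the corresponding node-side check; the limit of 3 mirrors glibc's MAXNS, which is also the bound kubelet validates against:

    def check_resolv_conf(path="/etc/resolv.conf", limit=3):
        """List configured nameservers and flag any beyond the applied limit."""
        with open(path) as f:
            servers = [parts[1] for line in f
                       if (parts := line.split()) and len(parts) > 1
                       and parts[0] == "nameserver"]
        if len(servers) > limit:
            print(f"{len(servers)} nameservers configured; only the first {limit} "
                  f"are applied: {' '.join(servers[:limit])}")
        return servers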