Jan 29 16:25:25.897529 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025 Jan 29 16:25:25.897549 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:25:25.897560 kernel: BIOS-provided physical RAM map: Jan 29 16:25:25.897567 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 16:25:25.897574 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 16:25:25.897580 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 16:25:25.897588 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 29 16:25:25.897594 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 29 16:25:25.897601 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 29 16:25:25.897610 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 29 16:25:25.897632 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 16:25:25.897639 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 16:25:25.897645 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 16:25:25.897652 kernel: NX (Execute Disable) protection: active Jan 29 16:25:25.897660 kernel: APIC: Static calls initialized Jan 29 16:25:25.897670 kernel: SMBIOS 2.8 present. 
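The BIOS-e820 map above advertises two usable RAM ranges. A quick sketch (Python, values copied verbatim from the map) sums them; the result lines up with the totals the kernel prints later ("Total pages: 632732", "Memory: 2432544K/2571752K available"), the remaining difference presumably being reserved regions and bookkeeping the kernel subtracts.

```python
# Usable RAM ranges from the BIOS-e820 map above (start, inclusive end).
usable = [
    (0x0000000000000000, 0x000000000009fbff),
    (0x0000000000100000, 0x000000009cfdbfff),
]

usable_bytes = sum(end - start + 1 for start, end in usable)
print(usable_bytes // 1024)   # 2571759 KiB -- the kernel later reports 2571752K total
print(usable_bytes // 4096)   # 642939 raw 4 KiB pages vs. "Total pages: 632732"
```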
Jan 29 16:25:25.897677 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 29 16:25:25.897684 kernel: Hypervisor detected: KVM Jan 29 16:25:25.897691 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 16:25:25.897698 kernel: kvm-clock: using sched offset of 2864944084 cycles Jan 29 16:25:25.897705 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 16:25:25.897712 kernel: tsc: Detected 2794.748 MHz processor Jan 29 16:25:25.897720 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 16:25:25.897727 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 16:25:25.897734 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 29 16:25:25.897744 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 16:25:25.897751 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 16:25:25.897758 kernel: Using GB pages for direct mapping Jan 29 16:25:25.897766 kernel: ACPI: Early table checksum verification disabled Jan 29 16:25:25.897773 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 29 16:25:25.897780 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:25:25.897787 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:25:25.897794 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:25:25.897801 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 29 16:25:25.897811 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:25:25.897818 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:25:25.897825 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:25:25.897832 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:25:25.897840 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 29 16:25:25.897847 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 29 16:25:25.897858 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 29 16:25:25.897867 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 29 16:25:25.897875 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 29 16:25:25.897882 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 29 16:25:25.897890 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 29 16:25:25.897897 kernel: No NUMA configuration found Jan 29 16:25:25.897904 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 29 16:25:25.897912 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 29 16:25:25.897922 kernel: Zone ranges: Jan 29 16:25:25.897929 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 16:25:25.897936 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 29 16:25:25.897944 kernel: Normal empty Jan 29 16:25:25.897951 kernel: Movable zone start for each node Jan 29 16:25:25.897958 kernel: Early memory node ranges Jan 29 16:25:25.897966 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 16:25:25.897973 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 29 16:25:25.897981 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 29 16:25:25.897990 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 16:25:25.897998 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 16:25:25.898005 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 29 16:25:25.898012 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 16:25:25.898020 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 16:25:25.898027 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 16:25:25.898035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 16:25:25.898042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 16:25:25.898049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 16:25:25.898059 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 16:25:25.898066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 16:25:25.898074 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 16:25:25.898081 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 16:25:25.898097 kernel: TSC deadline timer available Jan 29 16:25:25.898104 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 29 16:25:25.898112 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 16:25:25.898119 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 29 16:25:25.898126 kernel: kvm-guest: setup PV sched yield Jan 29 16:25:25.898134 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 29 16:25:25.898143 kernel: Booting paravirtualized kernel on KVM Jan 29 16:25:25.898151 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 16:25:25.898159 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 29 16:25:25.898167 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 29 16:25:25.898174 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 29 16:25:25.898181 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 29 16:25:25.898188 kernel: kvm-guest: PV spinlocks enabled Jan 29 16:25:25.898196 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 16:25:25.898204 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:25:25.898214 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 16:25:25.898222 kernel: random: crng init done Jan 29 16:25:25.898229 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 16:25:25.898237 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 16:25:25.898244 kernel: Fallback order for Node 0: 0 Jan 29 16:25:25.898252 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 29 16:25:25.898259 kernel: Policy zone: DMA32 Jan 29 16:25:25.898266 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 16:25:25.898276 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 138948K reserved, 0K cma-reserved) Jan 29 16:25:25.898284 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 29 16:25:25.898291 kernel: ftrace: allocating 37893 entries in 149 pages Jan 29 16:25:25.898298 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 16:25:25.898306 kernel: Dynamic Preempt: voluntary Jan 29 16:25:25.898313 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 16:25:25.898321 kernel: rcu: RCU event tracing is enabled. Jan 29 16:25:25.898329 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 29 16:25:25.898336 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 16:25:25.898346 kernel: Rude variant of Tasks RCU enabled. Jan 29 16:25:25.898353 kernel: Tracing variant of Tasks RCU enabled. Jan 29 16:25:25.898361 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 16:25:25.898368 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 29 16:25:25.898376 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 29 16:25:25.898383 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 16:25:25.898390 kernel: Console: colour VGA+ 80x25 Jan 29 16:25:25.898398 kernel: printk: console [ttyS0] enabled Jan 29 16:25:25.898405 kernel: ACPI: Core revision 20230628 Jan 29 16:25:25.898415 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 29 16:25:25.898422 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 16:25:25.898429 kernel: x2apic enabled Jan 29 16:25:25.898437 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 16:25:25.898444 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 29 16:25:25.898452 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 29 16:25:25.898459 kernel: kvm-guest: setup PV IPIs Jan 29 16:25:25.898476 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 16:25:25.898483 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 16:25:25.898491 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 29 16:25:25.898499 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 29 16:25:25.898507 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 29 16:25:25.898516 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 29 16:25:25.898524 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 16:25:25.898532 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 16:25:25.898540 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 16:25:25.898549 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 16:25:25.898560 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 29 16:25:25.898569 kernel: RETBleed: Mitigation: untrained return thunk Jan 29 16:25:25.898578 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 16:25:25.898586 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 16:25:25.898594 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 29 16:25:25.898602 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 29 16:25:25.898610 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 29 16:25:25.898629 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 16:25:25.898640 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 16:25:25.898647 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 16:25:25.898655 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 16:25:25.898663 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 29 16:25:25.898671 kernel: Freeing SMP alternatives memory: 32K Jan 29 16:25:25.898678 kernel: pid_max: default: 32768 minimum: 301 Jan 29 16:25:25.898686 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 16:25:25.898694 kernel: landlock: Up and running. Jan 29 16:25:25.898701 kernel: SELinux: Initializing. Jan 29 16:25:25.898712 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 16:25:25.898719 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 16:25:25.898727 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 29 16:25:25.898735 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 16:25:25.898743 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 16:25:25.898751 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 16:25:25.898759 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 29 16:25:25.898766 kernel: ... version: 0 Jan 29 16:25:25.898774 kernel: ... bit width: 48 Jan 29 16:25:25.898784 kernel: ... generic registers: 6 Jan 29 16:25:25.898792 kernel: ... value mask: 0000ffffffffffff Jan 29 16:25:25.898799 kernel: ... max period: 00007fffffffffff Jan 29 16:25:25.898807 kernel: ... fixed-purpose events: 0 Jan 29 16:25:25.898815 kernel: ... 
event mask: 000000000000003f Jan 29 16:25:25.898822 kernel: signal: max sigframe size: 1776 Jan 29 16:25:25.898830 kernel: rcu: Hierarchical SRCU implementation. Jan 29 16:25:25.898838 kernel: rcu: Max phase no-delay instances is 400. Jan 29 16:25:25.898845 kernel: smp: Bringing up secondary CPUs ... Jan 29 16:25:25.898855 kernel: smpboot: x86: Booting SMP configuration: Jan 29 16:25:25.898863 kernel: .... node #0, CPUs: #1 #2 #3 Jan 29 16:25:25.898881 kernel: smp: Brought up 1 node, 4 CPUs Jan 29 16:25:25.898889 kernel: smpboot: Max logical packages: 1 Jan 29 16:25:25.898897 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 29 16:25:25.898911 kernel: devtmpfs: initialized Jan 29 16:25:25.898920 kernel: x86/mm: Memory block size: 128MB Jan 29 16:25:25.898936 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 16:25:25.898957 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 29 16:25:25.898975 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 16:25:25.898984 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 16:25:25.898992 kernel: audit: initializing netlink subsys (disabled) Jan 29 16:25:25.899013 kernel: audit: type=2000 audit(1738167925.935:1): state=initialized audit_enabled=0 res=1 Jan 29 16:25:25.899022 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 16:25:25.899029 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 16:25:25.899037 kernel: cpuidle: using governor menu Jan 29 16:25:25.899045 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 16:25:25.899052 kernel: dca service started, version 1.12.1 Jan 29 16:25:25.899063 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 29 16:25:25.899071 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 29 16:25:25.899078 kernel: PCI: Using configuration type 1 for base access Jan 29 16:25:25.899094 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
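The per-CPU figure of 5589.49 BogoMIPS (lpj=2794748) and the 4-CPU total of 22357.98 above are consistent with the usual lpj-to-BogoMIPS relation, assuming a 1000 Hz tick for this kernel build (that HZ value is an assumption; only lpj and the printed figures come from the log).

```python
lpj = 2794748          # loops_per_jiffy, from the log
hz = 1000              # assumed CONFIG_HZ for this build
per_cpu = lpj * hz / 500000
print(round(per_cpu, 2))       # 5589.5   -> log: "5589.49 BogoMIPS (lpj=2794748)"
print(round(4 * per_cpu, 2))   # 22357.98 -> log: "Total of 4 processors activated (22357.98 BogoMIPS)"
```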
Jan 29 16:25:25.899102 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 16:25:25.899109 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 16:25:25.899117 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 16:25:25.899125 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 16:25:25.899132 kernel: ACPI: Added _OSI(Module Device) Jan 29 16:25:25.899143 kernel: ACPI: Added _OSI(Processor Device) Jan 29 16:25:25.899150 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 16:25:25.899158 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 16:25:25.899166 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 16:25:25.899173 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 16:25:25.899181 kernel: ACPI: Interpreter enabled Jan 29 16:25:25.899189 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 16:25:25.899196 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 16:25:25.899204 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 16:25:25.899215 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 16:25:25.899222 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 29 16:25:25.899230 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 16:25:25.899433 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 16:25:25.899567 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 29 16:25:25.899719 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 29 16:25:25.899734 kernel: PCI host bridge to bus 0000:00 Jan 29 16:25:25.899913 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 16:25:25.900042 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 16:25:25.900170 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 16:25:25.900285 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 29 16:25:25.900428 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 16:25:25.900590 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 29 16:25:25.900896 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 16:25:25.901124 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 29 16:25:25.901315 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 29 16:25:25.901482 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 29 16:25:25.901722 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 29 16:25:25.901911 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 29 16:25:25.902070 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 16:25:25.902263 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 16:25:25.902429 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 29 16:25:25.902586 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 29 16:25:25.902765 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 29 16:25:25.902938 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 29 16:25:25.903198 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 16:25:25.903359 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 29 
16:25:25.903522 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 29 16:25:25.903724 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 16:25:25.903886 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 29 16:25:25.904094 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 29 16:25:25.904257 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 29 16:25:25.904414 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 29 16:25:25.904598 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 29 16:25:25.904782 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 29 16:25:25.904960 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 29 16:25:25.905129 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 29 16:25:25.905287 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 29 16:25:25.905464 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 29 16:25:25.905646 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 29 16:25:25.905663 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 16:25:25.905679 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 16:25:25.905689 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 16:25:25.905700 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 16:25:25.905711 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 29 16:25:25.905721 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 29 16:25:25.905732 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 29 16:25:25.905742 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 29 16:25:25.905753 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 29 16:25:25.905763 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 29 16:25:25.905777 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 29 16:25:25.905788 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 29 16:25:25.905799 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 29 16:25:25.905809 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 29 16:25:25.905820 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 29 16:25:25.905830 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 29 16:25:25.905840 kernel: iommu: Default domain type: Translated Jan 29 16:25:25.905851 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 16:25:25.905861 kernel: PCI: Using ACPI for IRQ routing Jan 29 16:25:25.905875 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 16:25:25.905885 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 16:25:25.905896 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 29 16:25:25.906055 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 29 16:25:25.906224 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 29 16:25:25.906383 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 16:25:25.906398 kernel: vgaarb: loaded Jan 29 16:25:25.906409 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 29 16:25:25.906424 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 29 16:25:25.906435 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 16:25:25.906446 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 
16:25:25.906456 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 16:25:25.906467 kernel: pnp: PnP ACPI init Jan 29 16:25:25.906694 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 29 16:25:25.906712 kernel: pnp: PnP ACPI: found 6 devices Jan 29 16:25:25.906723 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 16:25:25.906738 kernel: NET: Registered PF_INET protocol family Jan 29 16:25:25.906748 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 16:25:25.906759 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 16:25:25.906770 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 16:25:25.906780 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 16:25:25.906791 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 16:25:25.906801 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 16:25:25.906812 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 16:25:25.906822 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 16:25:25.906836 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 16:25:25.906847 kernel: NET: Registered PF_XDP protocol family Jan 29 16:25:25.906991 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 16:25:25.907144 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 16:25:25.907287 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 16:25:25.907430 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 29 16:25:25.907573 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 29 16:25:25.907731 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 29 16:25:25.907752 kernel: PCI: CLS 0 bytes, default 64 Jan 29 16:25:25.907762 kernel: Initialise system trusted keyrings Jan 29 16:25:25.907773 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 16:25:25.907783 kernel: Key type asymmetric registered Jan 29 16:25:25.907794 kernel: Asymmetric key parser 'x509' registered Jan 29 16:25:25.907805 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 16:25:25.907815 kernel: io scheduler mq-deadline registered Jan 29 16:25:25.907826 kernel: io scheduler kyber registered Jan 29 16:25:25.907837 kernel: io scheduler bfq registered Jan 29 16:25:25.907850 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 16:25:25.907861 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 29 16:25:25.907872 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 16:25:25.907883 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 16:25:25.907893 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 16:25:25.907904 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 16:25:25.907915 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 16:25:25.907925 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 16:25:25.907936 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 16:25:25.908118 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 16:25:25.908135 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 16:25:25.908279 kernel: 
rtc_cmos 00:04: registered as rtc0 Jan 29 16:25:25.908425 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T16:25:25 UTC (1738167925) Jan 29 16:25:25.908571 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 29 16:25:25.908586 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 16:25:25.908597 kernel: NET: Registered PF_INET6 protocol family Jan 29 16:25:25.908607 kernel: Segment Routing with IPv6 Jan 29 16:25:25.908723 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 16:25:25.908735 kernel: NET: Registered PF_PACKET protocol family Jan 29 16:25:25.908746 kernel: Key type dns_resolver registered Jan 29 16:25:25.908756 kernel: IPI shorthand broadcast: enabled Jan 29 16:25:25.908767 kernel: sched_clock: Marking stable (601002373, 106072681)->(757317450, -50242396) Jan 29 16:25:25.908777 kernel: registered taskstats version 1 Jan 29 16:25:25.908788 kernel: Loading compiled-in X.509 certificates Jan 29 16:25:25.908799 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340' Jan 29 16:25:25.908809 kernel: Key type .fscrypt registered Jan 29 16:25:25.908823 kernel: Key type fscrypt-provisioning registered Jan 29 16:25:25.908834 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 16:25:25.908845 kernel: ima: Allocated hash algorithm: sha1 Jan 29 16:25:25.908856 kernel: ima: No architecture policies found Jan 29 16:25:25.908866 kernel: clk: Disabling unused clocks Jan 29 16:25:25.908876 kernel: Freeing unused kernel image (initmem) memory: 43472K Jan 29 16:25:25.908887 kernel: Write protecting the kernel read-only data: 38912k Jan 29 16:25:25.908898 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Jan 29 16:25:25.908908 kernel: Run /init as init process Jan 29 16:25:25.908922 kernel: with arguments: Jan 29 16:25:25.908933 kernel: /init Jan 29 16:25:25.908943 kernel: with environment: Jan 29 16:25:25.908953 kernel: HOME=/ Jan 29 16:25:25.908964 kernel: TERM=linux Jan 29 16:25:25.908974 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 16:25:25.908986 systemd[1]: Successfully made /usr/ read-only. Jan 29 16:25:25.909001 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:25:25.909016 systemd[1]: Detected virtualization kvm. Jan 29 16:25:25.909027 systemd[1]: Detected architecture x86-64. Jan 29 16:25:25.909038 systemd[1]: Running in initrd. Jan 29 16:25:25.909050 systemd[1]: No hostname configured, using default hostname. Jan 29 16:25:25.909061 systemd[1]: Hostname set to . Jan 29 16:25:25.909072 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:25:25.909091 systemd[1]: Queued start job for default target initrd.target. Jan 29 16:25:25.909103 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:25:25.909118 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:25:25.909145 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 16:25:25.909159 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 29 16:25:25.909171 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 16:25:25.909184 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 16:25:25.909200 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 16:25:25.909212 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 16:25:25.909224 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:25:25.909235 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:25:25.909247 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:25:25.909258 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:25:25.909270 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:25:25.909281 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:25:25.909296 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:25:25.909307 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:25:25.909319 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 16:25:25.909331 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 29 16:25:25.909343 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:25:25.909354 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:25:25.909366 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:25:25.909377 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:25:25.909389 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 16:25:25.909404 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:25:25.909415 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 16:25:25.909427 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 16:25:25.909438 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:25:25.909450 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:25:25.909462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:25:25.909477 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 16:25:25.909488 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:25:25.909504 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 16:25:25.909545 systemd-journald[194]: Collecting audit messages is disabled. Jan 29 16:25:25.909577 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:25:25.909589 systemd-journald[194]: Journal started Jan 29 16:25:25.909630 systemd-journald[194]: Runtime Journal (/run/log/journal/d502f4f78c4f45afb069db93854fccac) is 6M, max 48.4M, 42.3M free. Jan 29 16:25:25.901833 systemd-modules-load[195]: Inserted module 'overlay' Jan 29 16:25:25.935952 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:25:25.935972 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
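The "Expecting device dev-disk-by\x2dlabel-…" entries above pair systemd-escaped .device unit names with the /dev paths they stand for ('/' becomes '-', a literal '-' becomes \x2d). A minimal decoder for just the names seen here, as a sketch (not a substitute for systemd-escape --unescape):

```python
import re

def device_unit_to_path(name: str) -> str:
    """Decode a systemd .device unit name such as
    dev-disk-by\\x2dlabel-ROOT.device back into /dev/disk/by-label/ROOT."""
    name = name.removesuffix(".device")
    parts = name.split("-")          # '-' separates path components
    decoded = [re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), p) for p in parts]
    return "/" + "/".join(decoded)

print(device_unit_to_path(r"dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device"))
# /dev/disk/by-label/EFI-SYSTEM
print(device_unit_to_path(r"dev-disk-by\x2dpartlabel-USR\x2dA.device"))
# /dev/disk/by-partlabel/USR-A
```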
Jan 29 16:25:25.935987 kernel: Bridge firewalling registered Jan 29 16:25:25.928791 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 29 16:25:25.936213 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:25:25.936832 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:25:25.952821 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:25:25.953760 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:25:25.955770 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:25:25.967994 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:25:25.970790 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:25:25.971287 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:25:25.986836 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:25:25.990414 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:25:25.993931 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:25:25.998423 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 16:25:26.004118 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:25:26.014016 dracut-cmdline[232]: dracut-dracut-053 Jan 29 16:25:26.017031 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:25:26.033229 systemd-resolved[226]: Positive Trust Anchors: Jan 29 16:25:26.033249 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:25:26.033289 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:25:26.036092 systemd-resolved[226]: Defaulting to hostname 'linux'. Jan 29 16:25:26.037303 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:25:26.046269 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:25:26.093651 kernel: SCSI subsystem initialized Jan 29 16:25:26.102649 kernel: Loading iSCSI transport class v2.0-870. Jan 29 16:25:26.112651 kernel: iscsi: registered transport (tcp) Jan 29 16:25:26.133640 kernel: iscsi: registered transport (qla4xxx) Jan 29 16:25:26.133666 kernel: QLogic iSCSI HBA Driver Jan 29 16:25:26.180772 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
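dracut-cmdline[232] above echoes the effective kernel command line. A small parse of those key=value arguments (string copied from the log) picks out the pieces that matter for Flatcar's layout: root=LABEL=ROOT for the writable root, and mount.usr / verity.usr / verity.usrhash describing the dm-verity-protected /usr partition.

```python
cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
           "mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
           "flatcar.first_boot=detected "
           "verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d")

args, flags = {}, []
for token in cmdline.split():
    key, sep, value = token.partition("=")
    if sep:
        args.setdefault(key, []).append(value)   # repeated keys (rootflags, mount.usrflags) are kept
    else:
        flags.append(key)

print(args["root"])            # ['LABEL=ROOT']
print(args["mount.usr"])       # ['/dev/mapper/usr'] -- the verity-mapped /usr device
print(args["verity.usrhash"])  # expected dm-verity root hash for the usr partition
```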
Jan 29 16:25:26.191752 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 16:25:26.220921 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 16:25:26.220956 kernel: device-mapper: uevent: version 1.0.3 Jan 29 16:25:26.222293 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 16:25:26.262644 kernel: raid6: avx2x4 gen() 30537 MB/s Jan 29 16:25:26.279639 kernel: raid6: avx2x2 gen() 31041 MB/s Jan 29 16:25:26.296736 kernel: raid6: avx2x1 gen() 25980 MB/s Jan 29 16:25:26.296755 kernel: raid6: using algorithm avx2x2 gen() 31041 MB/s Jan 29 16:25:26.314745 kernel: raid6: .... xor() 19827 MB/s, rmw enabled Jan 29 16:25:26.314768 kernel: raid6: using avx2x2 recovery algorithm Jan 29 16:25:26.334640 kernel: xor: automatically using best checksumming function avx Jan 29 16:25:26.477648 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 16:25:26.490186 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:25:26.497819 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:25:26.512992 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jan 29 16:25:26.518445 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:25:26.530752 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 16:25:26.543167 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Jan 29 16:25:26.573844 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:25:26.586767 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:25:26.651503 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:25:26.662108 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 16:25:26.673839 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 16:25:26.676257 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:25:26.677657 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:25:26.681183 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:25:26.690809 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 16:25:26.706689 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:25:26.711642 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 29 16:25:26.732379 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 16:25:26.732402 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 16:25:26.732724 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 16:25:26.732763 kernel: GPT:9289727 != 19775487 Jan 29 16:25:26.732795 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 16:25:26.732837 kernel: GPT:9289727 != 19775487 Jan 29 16:25:26.732862 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 16:25:26.732894 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:25:26.722672 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:25:26.739757 kernel: libata version 3.00 loaded. Jan 29 16:25:26.722843 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
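The GPT warnings above ("GPT:9289727 != 19775487") say the backup GPT header sits at LBA 9289727 while the virtio disk actually ends at LBA 19775487, i.e. the image was evidently written for a smaller disk than the 10.1 GB vda it now occupies; this is presumably what disk-uuid.service addresses a moment later when it reports updating the primary and secondary headers. The arithmetic, with numbers copied from the log:

```python
SECTOR = 512                      # vda uses 512-byte logical blocks (from the log)
disk_sectors = 19775488           # "[vda] 19775488 512-byte logical blocks"
backup_hdr_lba = 9289727          # where the image's backup GPT header was found

print(disk_sectors * SECTOR / 1e9)          # ~10.1 GB  (log: "10.1 GB/9.43 GiB")
print((backup_hdr_lba + 1) * SECTOR / 1e9)  # ~4.8 GB   -> size the image was built for
print(disk_sectors - 1)                     # 19775487, where the backup header belongs
```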
Jan 29 16:25:26.724882 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:25:26.727532 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:25:26.727707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:25:26.733366 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:25:26.745231 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:25:26.757098 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 16:25:26.757122 kernel: AES CTR mode by8 optimization enabled Jan 29 16:25:26.757133 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 16:25:26.777562 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 16:25:26.777579 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 16:25:26.778814 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 16:25:26.779027 kernel: scsi host0: ahci Jan 29 16:25:26.779206 kernel: scsi host1: ahci Jan 29 16:25:26.779367 kernel: scsi host2: ahci Jan 29 16:25:26.779520 kernel: scsi host3: ahci Jan 29 16:25:26.780740 kernel: scsi host4: ahci Jan 29 16:25:26.780905 kernel: scsi host5: ahci Jan 29 16:25:26.781071 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 29 16:25:26.781086 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 29 16:25:26.781100 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 29 16:25:26.781112 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 29 16:25:26.781122 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 29 16:25:26.781136 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 29 16:25:26.781146 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (461) Jan 29 16:25:26.784505 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 16:25:26.826260 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (477) Jan 29 16:25:26.831089 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:25:26.846955 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 16:25:26.860389 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 16:25:26.860996 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 16:25:26.872971 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 16:25:26.885759 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 16:25:26.887609 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:25:26.909652 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:25:27.025767 disk-uuid[557]: Primary Header is updated. Jan 29 16:25:27.025767 disk-uuid[557]: Secondary Entries is updated. Jan 29 16:25:27.025767 disk-uuid[557]: Secondary Header is updated. 
Jan 29 16:25:27.029711 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:25:27.034658 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:25:27.089692 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 16:25:27.089750 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 16:25:27.093648 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 16:25:27.093686 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 16:25:27.093697 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 16:25:27.095381 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 16:25:27.097154 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 16:25:27.097177 kernel: ata3.00: applying bridge limits Jan 29 16:25:27.097654 kernel: ata3.00: configured for UDMA/100 Jan 29 16:25:27.101656 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 16:25:27.165669 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 16:25:27.179280 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 16:25:27.179294 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 16:25:28.035647 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:25:28.035987 disk-uuid[566]: The operation has completed successfully. Jan 29 16:25:28.067331 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 16:25:28.067455 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 16:25:28.109726 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 16:25:28.114798 sh[594]: Success Jan 29 16:25:28.126636 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 16:25:28.160895 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 16:25:28.180057 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 16:25:28.182706 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 16:25:28.193155 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3 Jan 29 16:25:28.193184 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:25:28.193195 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 16:25:28.194874 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 16:25:28.194888 kernel: BTRFS info (device dm-0): using free space tree Jan 29 16:25:28.199412 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 16:25:28.200351 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 16:25:28.214780 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 16:25:28.216734 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 16:25:28.228752 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:25:28.228791 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:25:28.228803 kernel: BTRFS info (device vda6): using free space tree Jan 29 16:25:28.231687 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 16:25:28.241285 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 29 16:25:28.243100 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:25:28.252677 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 16:25:28.262781 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 16:25:28.320277 ignition[691]: Ignition 2.20.0 Jan 29 16:25:28.320291 ignition[691]: Stage: fetch-offline Jan 29 16:25:28.320335 ignition[691]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:25:28.320349 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:25:28.320478 ignition[691]: parsed url from cmdline: "" Jan 29 16:25:28.320484 ignition[691]: no config URL provided Jan 29 16:25:28.320491 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:25:28.320504 ignition[691]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:25:28.320532 ignition[691]: op(1): [started] loading QEMU firmware config module Jan 29 16:25:28.320538 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 16:25:28.327554 ignition[691]: op(1): [finished] loading QEMU firmware config module Jan 29 16:25:28.341523 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:25:28.353761 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:25:28.369034 ignition[691]: parsing config with SHA512: 79680ee7deee29cdb20f5189a854c02cc0b7de1019711f21eac2c287d38d4f9c7ffba40e7505bc90a70b6c3655422159fbb746d2b447331d42a861fbbdadc965 Jan 29 16:25:28.374243 unknown[691]: fetched base config from "system" Jan 29 16:25:28.375061 unknown[691]: fetched user config from "qemu" Jan 29 16:25:28.375404 ignition[691]: fetch-offline: fetch-offline passed Jan 29 16:25:28.375475 ignition[691]: Ignition finished successfully Jan 29 16:25:28.378930 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:25:28.382476 systemd-networkd[783]: lo: Link UP Jan 29 16:25:28.382488 systemd-networkd[783]: lo: Gained carrier Jan 29 16:25:28.384261 systemd-networkd[783]: Enumeration completed Jan 29 16:25:28.384371 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:25:28.384671 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:25:28.384676 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:25:28.385792 systemd-networkd[783]: eth0: Link UP Jan 29 16:25:28.385796 systemd-networkd[783]: eth0: Gained carrier Jan 29 16:25:28.385804 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:25:28.386687 systemd[1]: Reached target network.target - Network. Jan 29 16:25:28.388533 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 16:25:28.399771 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
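Ignition's fetch-offline stage above found no config URL on the command line, loaded the qemu_fw_cfg module, and pulled the user config from QEMU's firmware config interface before handing it to the later kargs/disks/files stages. As a rough illustration only, a config of the general shape those later stages act on (an SSH key for "core", files under /home/core, a prepare-helm.service unit) might look like the sketch below; the structure follows Ignition's spec-3 JSON format, and every concrete value is a placeholder rather than something recovered from the log.

```python
import json

# Hypothetical Ignition user config; placeholders only, not the config from this boot.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder@example"]}
        ]
    },
    "storage": {
        "files": [
            {"path": "/home/core/install.sh",
             "mode": 493,   # 0o755
             "contents": {"source": "data:,%23%21%2Fbin%2Fbash%0A"}}  # "#!/bin/bash\n"
        ]
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service",
             "enabled": True,
             "contents": "<placeholder unit file>"}
        ]
    },
}

print(json.dumps(config, indent=2))
```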
Jan 29 16:25:28.408679 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:25:28.415558 ignition[787]: Ignition 2.20.0 Jan 29 16:25:28.415568 ignition[787]: Stage: kargs Jan 29 16:25:28.415740 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:25:28.415752 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:25:28.416495 ignition[787]: kargs: kargs passed Jan 29 16:25:28.416535 ignition[787]: Ignition finished successfully Jan 29 16:25:28.419749 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 16:25:28.431849 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 16:25:28.444005 ignition[797]: Ignition 2.20.0 Jan 29 16:25:28.444025 ignition[797]: Stage: disks Jan 29 16:25:28.444171 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:25:28.444182 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:25:28.444940 ignition[797]: disks: disks passed Jan 29 16:25:28.444980 ignition[797]: Ignition finished successfully Jan 29 16:25:28.450411 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 16:25:28.451053 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 16:25:28.452583 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 16:25:28.452916 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:25:28.453247 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:25:28.453583 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:25:28.472915 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 16:25:28.499087 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 16:25:28.505582 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 16:25:29.205706 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 16:25:29.290637 kernel: EXT4-fs (vda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none. Jan 29 16:25:29.291075 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 16:25:29.293419 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 16:25:29.305714 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:25:29.308286 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 16:25:29.310640 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 16:25:29.310688 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 16:25:29.319241 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815) Jan 29 16:25:29.319268 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:25:29.319284 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:25:29.319298 kernel: BTRFS info (device vda6): using free space tree Jan 29 16:25:29.310712 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 29 16:25:29.321640 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 16:25:29.323206 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 16:25:29.325097 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 16:25:29.344746 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 16:25:29.376926 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 16:25:29.382450 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 29 16:25:29.387660 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 16:25:29.392571 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 16:25:29.482817 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 16:25:29.498757 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 16:25:29.499950 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 16:25:29.511665 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:25:29.529572 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 16:25:29.538066 ignition[929]: INFO : Ignition 2.20.0 Jan 29 16:25:29.538066 ignition[929]: INFO : Stage: mount Jan 29 16:25:29.539789 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:25:29.539789 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:25:29.542338 ignition[929]: INFO : mount: mount passed Jan 29 16:25:29.543146 ignition[929]: INFO : Ignition finished successfully Jan 29 16:25:29.546025 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 16:25:29.558797 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 16:25:29.976832 systemd-networkd[783]: eth0: Gained IPv6LL Jan 29 16:25:30.192750 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 16:25:30.205946 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:25:30.213652 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942) Jan 29 16:25:30.217708 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:25:30.217772 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:25:30.217784 kernel: BTRFS info (device vda6): using free space tree Jan 29 16:25:30.220647 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 16:25:30.222571 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 16:25:30.247044 ignition[959]: INFO : Ignition 2.20.0 Jan 29 16:25:30.247044 ignition[959]: INFO : Stage: files Jan 29 16:25:30.248912 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:25:30.248912 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:25:30.248912 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:25:30.253037 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:25:30.253037 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:25:30.256200 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:25:30.257547 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:25:30.257547 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:25:30.256845 unknown[959]: wrote ssh authorized keys file for user: core Jan 29 16:25:30.261325 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 16:25:30.261325 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 29 16:25:30.303735 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 16:25:30.448708 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 16:25:30.450925 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:25:30.450925 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:25:30.450925 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:25:30.450925 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:25:30.450925 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:25:30.450925 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:25:30.450925 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:25:30.450925 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:25:30.450925 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:25:30.450925 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:25:30.450925 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 16:25:30.450925 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 16:25:30.450925 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 16:25:30.450925 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 29 16:25:30.832765 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 16:25:31.157268 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 16:25:31.157268 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 16:25:31.160820 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:25:31.162793 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:25:31.162793 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 16:25:31.162793 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 29 16:25:31.162793 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 16:25:31.162793 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 16:25:31.162793 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 29 16:25:31.162793 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 16:25:31.182120 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 16:25:31.186078 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 16:25:31.187811 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 16:25:31.187811 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 29 16:25:31.187811 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 16:25:31.187811 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:25:31.187811 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:25:31.187811 ignition[959]: INFO : files: files passed Jan 29 16:25:31.187811 ignition[959]: INFO : Ignition finished successfully Jan 29 16:25:31.199208 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 16:25:31.210763 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 16:25:31.212705 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:25:31.219308 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 29 16:25:31.219465 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 16:25:31.224885 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 16:25:31.228476 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:25:31.228476 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:25:31.232103 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:25:31.235051 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:25:31.235736 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:25:31.250765 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:25:31.271114 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:25:31.272235 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:25:31.274960 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:25:31.277135 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:25:31.279348 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:25:31.281703 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:25:31.296665 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:25:31.309752 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:25:31.320990 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:25:31.323545 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:25:31.326182 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:25:31.328214 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:25:31.329353 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:25:31.332179 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:25:31.334428 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:25:31.336422 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:25:31.338849 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:25:31.341427 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:25:31.343896 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:25:31.346189 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:25:31.348929 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:25:31.351248 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:25:31.353524 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:25:31.355338 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:25:31.356480 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:25:31.359008 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 29 16:25:31.361349 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:25:31.363925 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:25:31.365045 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:25:31.367702 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:25:31.368811 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:25:31.371263 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:25:31.372445 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:25:31.375067 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:25:31.377019 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:25:31.381660 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:25:31.384602 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:25:31.386631 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:25:31.388491 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:25:31.389353 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:25:31.391302 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:25:31.392180 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:25:31.394214 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:25:31.395382 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:25:31.397864 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:25:31.398850 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:25:31.416790 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:25:31.419428 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:25:31.421522 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:25:31.422806 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:25:31.425590 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:25:31.426902 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:25:31.427777 ignition[1013]: INFO : Ignition 2.20.0 Jan 29 16:25:31.428887 ignition[1013]: INFO : Stage: umount Jan 29 16:25:31.430895 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:25:31.430895 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:25:31.430895 ignition[1013]: INFO : umount: umount passed Jan 29 16:25:31.430895 ignition[1013]: INFO : Ignition finished successfully Jan 29 16:25:31.437609 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:25:31.438767 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:25:31.444109 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:25:31.445348 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:25:31.449125 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:25:31.450824 systemd[1]: Stopped target network.target - Network. Jan 29 16:25:31.452839 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 29 16:25:31.453995 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:25:31.456159 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:25:31.456229 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:25:31.459340 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:25:31.460401 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:25:31.462394 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:25:31.463436 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:25:31.465958 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:25:31.468296 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:25:31.476699 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:25:31.477720 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:25:31.481895 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:25:31.483331 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:25:31.484322 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:25:31.487422 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:25:31.489291 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:25:31.490224 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:25:31.503722 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:25:31.504134 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:25:31.504187 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:25:31.504506 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:25:31.504548 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:25:31.509853 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:25:31.509902 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:25:31.510301 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:25:31.510343 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:25:31.514756 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:25:31.517180 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:25:31.517254 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:25:31.523847 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:25:31.523983 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:25:31.537414 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:25:31.537591 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:25:31.539812 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:25:31.539858 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:25:31.543258 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:25:31.543296 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 16:25:31.545232 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:25:31.545281 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:25:31.547471 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:25:31.547520 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:25:31.549395 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:25:31.549447 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:25:31.559823 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:25:31.562038 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:25:31.562111 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:25:31.565572 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 16:25:31.565638 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:25:31.567915 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:25:31.567973 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:25:31.570524 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:25:31.570573 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:25:31.573570 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 29 16:25:31.573658 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:25:31.574029 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:25:31.574142 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:25:31.620637 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:25:31.620780 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:25:31.621921 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:25:31.625236 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:25:31.625319 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:25:31.634842 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:25:31.643083 systemd[1]: Switching root. Jan 29 16:25:31.676368 systemd-journald[194]: Journal stopped Jan 29 16:25:33.025017 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 29 16:25:33.025094 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:25:33.025113 kernel: SELinux: policy capability open_perms=1 Jan 29 16:25:33.025128 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:25:33.025143 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:25:33.025157 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:25:33.025172 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:25:33.025187 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:25:33.025202 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:25:33.025217 kernel: audit: type=1403 audit(1738167932.223:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:25:33.025243 systemd[1]: Successfully loaded SELinux policy in 43.597ms. 
Jan 29 16:25:33.025277 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.805ms. Jan 29 16:25:33.025295 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:25:33.025312 systemd[1]: Detected virtualization kvm. Jan 29 16:25:33.025328 systemd[1]: Detected architecture x86-64. Jan 29 16:25:33.025343 systemd[1]: Detected first boot. Jan 29 16:25:33.025359 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:25:33.025378 zram_generator::config[1060]: No configuration found. Jan 29 16:25:33.025398 kernel: Guest personality initialized and is inactive Jan 29 16:25:33.025413 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 16:25:33.025429 kernel: Initialized host personality Jan 29 16:25:33.025443 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:25:33.025465 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:25:33.025482 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:25:33.025499 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:25:33.025515 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:25:33.025531 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:25:33.025551 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:25:33.025568 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:25:33.025584 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:25:33.025607 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:25:33.025641 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:25:33.025659 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:25:33.025675 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:25:33.025691 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:25:33.025708 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:25:33.025729 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:25:33.025746 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:25:33.025765 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:25:33.025783 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:25:33.025799 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:25:33.025815 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 16:25:33.025832 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:25:33.025853 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:25:33.025869 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
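The systemd version banner above encodes compile-time options as a +/- feature string. A purely illustrative helper for splitting that string into enabled and disabled sets; the string below is copied verbatim from the banner:

    # Split systemd's compile-time feature string into enabled/disabled sets.
    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
                "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
                "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

    enabled = {f[1:] for f in features.split() if f.startswith("+")}
    disabled = {f[1:] for f in features.split() if f.startswith("-")}
    print(sorted(disabled))  # e.g. ACL, APPARMOR, FIDO2, GNUTLS, ...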
Jan 29 16:25:33.025886 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:25:33.025913 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:25:33.025929 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:25:33.025946 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:25:33.025962 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:25:33.025978 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:25:33.025994 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:25:33.026015 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:25:33.026031 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:25:33.026047 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:25:33.026064 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:25:33.026080 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:25:33.026096 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:25:33.026112 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:25:33.026131 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:25:33.026147 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:25:33.026167 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:33.026183 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:25:33.026200 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:25:33.026216 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:25:33.026233 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:25:33.026249 systemd[1]: Reached target machines.target - Containers. Jan 29 16:25:33.026265 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:25:33.026281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:25:33.026298 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:25:33.026317 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:25:33.026334 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:25:33.026350 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:25:33.026366 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:25:33.026382 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:25:33.026398 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:25:33.026414 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:25:33.026430 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 29 16:25:33.026450 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:25:33.026466 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:25:33.026486 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 16:25:33.026504 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:25:33.026526 kernel: fuse: init (API version 7.39) Jan 29 16:25:33.026542 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:25:33.026558 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:25:33.026575 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:25:33.026591 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:25:33.026610 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:25:33.026642 kernel: loop: module loaded Jan 29 16:25:33.026658 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:25:33.026675 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:25:33.026690 systemd[1]: Stopped verity-setup.service. Jan 29 16:25:33.026707 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:33.026727 kernel: ACPI: bus type drm_connector registered Jan 29 16:25:33.026743 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:25:33.026781 systemd-journald[1131]: Collecting audit messages is disabled. Jan 29 16:25:33.026811 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:25:33.026828 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:25:33.026845 systemd-journald[1131]: Journal started Jan 29 16:25:33.026878 systemd-journald[1131]: Runtime Journal (/run/log/journal/d502f4f78c4f45afb069db93854fccac) is 6M, max 48.4M, 42.3M free. Jan 29 16:25:32.778954 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:25:32.790653 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 16:25:32.791158 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:25:33.030165 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:25:33.030999 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:25:33.032255 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:25:33.033515 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:25:33.034868 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:25:33.036393 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:25:33.037971 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:25:33.038187 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:25:33.039738 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:25:33.039962 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:25:33.041386 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 29 16:25:33.041600 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:25:33.043098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:25:33.043311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:25:33.044862 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:25:33.045090 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:25:33.046484 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:25:33.046726 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:25:33.048343 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:25:33.049810 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:25:33.051527 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:25:33.053161 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:25:33.069079 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:25:33.076757 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:25:33.079373 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:25:33.080664 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:25:33.080703 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:25:33.083106 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 16:25:33.085878 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:25:33.090197 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:25:33.091671 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:25:33.093398 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:25:33.098087 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:25:33.101486 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:25:33.103766 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 16:25:33.105117 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:25:33.107784 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:25:33.114731 systemd-journald[1131]: Time spent on flushing to /var/log/journal/d502f4f78c4f45afb069db93854fccac is 17.116ms for 965 entries. Jan 29 16:25:33.114731 systemd-journald[1131]: System Journal (/var/log/journal/d502f4f78c4f45afb069db93854fccac) is 8M, max 195.6M, 187.6M free. Jan 29 16:25:33.152359 systemd-journald[1131]: Received client request to flush runtime journal. Jan 29 16:25:33.152399 kernel: loop0: detected capacity change from 0 to 147912 Jan 29 16:25:33.113764 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
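The flush statistics above (17.116 ms spent on 965 entries while moving the runtime journal to /var/log/journal) work out to roughly 18 microseconds per entry; a quick check using only the figures from the log:

    # Average time journald spent per entry while flushing the runtime
    # journal to persistent storage, per the statistics above.
    print(17.116e-3 / 965 * 1e6)  # ~17.7 microseconds per entry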
Jan 29 16:25:33.119932 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:25:33.125409 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:25:33.127772 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:25:33.134832 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 16:25:33.138300 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 16:25:33.140217 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:25:33.151425 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:25:33.161363 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:25:33.161879 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:25:33.165077 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:25:33.165615 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Jan 29 16:25:33.165651 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Jan 29 16:25:33.167547 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:25:33.169560 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:25:33.178194 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:25:33.189848 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:25:33.196014 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:25:33.196736 kernel: loop1: detected capacity change from 0 to 138176 Jan 29 16:25:33.197008 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:25:33.200406 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 16:25:33.219074 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:25:33.226924 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:25:33.235655 kernel: loop2: detected capacity change from 0 to 218376 Jan 29 16:25:33.250454 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jan 29 16:25:33.250479 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jan 29 16:25:33.258734 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:25:33.288654 kernel: loop3: detected capacity change from 0 to 147912 Jan 29 16:25:33.304828 kernel: loop4: detected capacity change from 0 to 138176 Jan 29 16:25:33.318646 kernel: loop5: detected capacity change from 0 to 218376 Jan 29 16:25:33.327997 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 16:25:33.328657 (sd-merge)[1208]: Merged extensions into '/usr'. Jan 29 16:25:33.332734 systemd[1]: Reload requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:25:33.332752 systemd[1]: Reloading... Jan 29 16:25:33.394645 zram_generator::config[1236]: No configuration found. Jan 29 16:25:33.449273 ldconfig[1175]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
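The sd-merge messages above show systemd-sysext activating the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images and overlaying them onto /usr. The sketch below only approximates the discovery step; the directory list is an assumption based on systemd-sysext's documented search paths, not something taken from this log:

    # Illustrative approximation of how extension images are discovered
    # before being merged into /usr. Search directories are assumed.
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def candidate_extensions():
        found = []
        for d in SEARCH_DIRS:
            p = Path(d)
            if not p.is_dir():
                continue
            # Raw disk images (*.raw) and plain directories are both accepted.
            found += [e for e in p.iterdir() if e.suffix == ".raw" or e.is_dir()]
        return found

    print(candidate_extensions())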
Jan 29 16:25:33.525052 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:25:33.615524 systemd[1]: Reloading finished in 282 ms. Jan 29 16:25:33.634394 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:25:33.636070 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:25:33.661441 systemd[1]: Starting ensure-sysext.service... Jan 29 16:25:33.663572 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:25:33.684328 systemd[1]: Reload requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:25:33.684458 systemd[1]: Reloading... Jan 29 16:25:33.685480 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:25:33.685777 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:25:33.686707 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:25:33.686987 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jan 29 16:25:33.687058 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jan 29 16:25:33.692512 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:25:33.692641 systemd-tmpfiles[1274]: Skipping /boot Jan 29 16:25:33.710074 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:25:33.710241 systemd-tmpfiles[1274]: Skipping /boot Jan 29 16:25:33.747652 zram_generator::config[1306]: No configuration found. Jan 29 16:25:33.859567 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:25:33.927016 systemd[1]: Reloading finished in 242 ms. Jan 29 16:25:33.939483 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:25:33.958483 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:25:33.967711 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:25:33.970292 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:25:33.973796 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 16:25:33.977717 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:25:33.981945 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:25:33.984953 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:25:33.990252 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:33.990425 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:25:33.995455 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:25:34.000904 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 29 16:25:34.004584 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:25:34.006024 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:25:34.006293 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:25:34.008835 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:25:34.012043 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:34.013958 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:25:34.016058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:25:34.016267 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:25:34.018235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:25:34.018440 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:25:34.021084 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:25:34.021394 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:25:34.027541 systemd-udevd[1347]: Using default interface naming scheme 'v255'. Jan 29 16:25:34.034681 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:34.034914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:25:34.043928 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:25:34.047574 augenrules[1376]: No rules Jan 29 16:25:34.049519 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:25:34.053128 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:25:34.054792 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:25:34.054919 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:25:34.057050 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:25:34.059079 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:34.060823 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:25:34.061230 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:25:34.062886 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:25:34.065209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:25:34.065438 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:25:34.066988 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:25:34.068333 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 29 16:25:34.070146 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:25:34.072965 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:25:34.073175 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:25:34.074923 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:25:34.075143 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:25:34.077101 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:25:34.102682 systemd[1]: Finished ensure-sysext.service. Jan 29 16:25:34.107441 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:34.117563 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:25:34.119012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:25:34.120708 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:25:34.124905 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:25:34.130263 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:25:34.134073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:25:34.137875 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:25:34.137932 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:25:34.140440 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:25:34.145844 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 16:25:34.147167 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:25:34.147210 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:34.148076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:25:34.148385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:25:34.149192 augenrules[1416]: /sbin/augenrules: No change Jan 29 16:25:34.151211 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:25:34.151495 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:25:34.159163 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 16:25:34.160876 augenrules[1440]: No rules Jan 29 16:25:34.165066 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:25:34.165373 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:25:34.168445 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:25:34.168763 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:25:34.174438 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 29 16:25:34.174889 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:25:34.182655 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1389) Jan 29 16:25:34.190966 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:25:34.191046 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:25:34.205821 systemd-resolved[1345]: Positive Trust Anchors: Jan 29 16:25:34.206147 systemd-resolved[1345]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:25:34.206220 systemd-resolved[1345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:25:34.210398 systemd-resolved[1345]: Defaulting to hostname 'linux'. Jan 29 16:25:34.217088 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 16:25:34.218646 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 16:25:34.220200 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:25:34.223646 kernel: ACPI: button: Power Button [PWRF] Jan 29 16:25:34.233080 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:25:34.243812 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:25:34.253672 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 16:25:34.256467 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 16:25:34.256729 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 16:25:34.259723 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 16:25:34.263801 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:25:34.268873 systemd-networkd[1428]: lo: Link UP Jan 29 16:25:34.268883 systemd-networkd[1428]: lo: Gained carrier Jan 29 16:25:34.271141 systemd-networkd[1428]: Enumeration completed Jan 29 16:25:34.271250 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:25:34.272914 systemd[1]: Reached target network.target - Network. Jan 29 16:25:34.275211 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:25:34.275219 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:25:34.276612 systemd-networkd[1428]: eth0: Link UP Jan 29 16:25:34.276680 systemd-networkd[1428]: eth0: Gained carrier Jan 29 16:25:34.276731 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 16:25:34.285851 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:25:34.292316 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:25:34.292677 systemd-networkd[1428]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:25:34.293920 systemd-timesyncd[1431]: Network configuration changed, trying to establish connection. Jan 29 16:25:34.294024 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 16:25:35.359104 systemd-resolved[1345]: Clock change detected. Flushing caches. Jan 29 16:25:35.359236 systemd-timesyncd[1431]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 16:25:35.359278 systemd-timesyncd[1431]: Initial clock synchronization to Wed 2025-01-29 16:25:35.359075 UTC. Jan 29 16:25:35.372921 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:25:35.381923 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:25:35.388513 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:25:35.406243 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:25:35.477160 kernel: kvm_amd: TSC scaling supported Jan 29 16:25:35.477255 kernel: kvm_amd: Nested Virtualization enabled Jan 29 16:25:35.477289 kernel: kvm_amd: Nested Paging enabled Jan 29 16:25:35.477320 kernel: kvm_amd: LBR virtualization supported Jan 29 16:25:35.477879 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 16:25:35.479246 kernel: kvm_amd: Virtual GIF supported Jan 29 16:25:35.500821 kernel: EDAC MC: Ver: 3.0.0 Jan 29 16:25:35.542698 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:25:35.545704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:25:35.560211 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:25:35.568109 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:25:35.600523 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:25:35.602462 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:25:35.603883 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:25:35.605180 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:25:35.606481 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:25:35.608326 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:25:35.609803 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:25:35.611329 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:25:35.612859 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:25:35.612904 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:25:35.614071 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:25:35.616397 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
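The jump in journal timestamps above, from 16:25:34.294024 just before systemd-timesyncd started to 16:25:35.359104 at the "Clock change detected" message, corresponds to the initial synchronization against 10.0.0.1 stepping the clock forward by roughly one second. A quick check of that apparent step:

    # Apparent forward clock step implied by the two adjacent journal
    # timestamps around the initial time synchronization.
    from datetime import datetime

    before = datetime.strptime("16:25:34.294024", "%H:%M:%S.%f")
    after = datetime.strptime("16:25:35.359104", "%H:%M:%S.%f")
    print((after - before).total_seconds())  # ~1.065 s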
Jan 29 16:25:35.619411 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:25:35.623192 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:25:35.624751 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:25:35.626131 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:25:35.630106 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:25:35.631610 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:25:35.634318 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:25:35.635993 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:25:35.637191 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:25:35.638194 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:25:35.639191 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:25:35.639222 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:25:35.640215 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:25:35.642344 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:25:35.644952 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:25:35.646222 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:25:35.649035 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:25:35.650131 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:25:35.651967 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:25:35.656816 jq[1484]: false Jan 29 16:25:35.657692 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 16:25:35.660956 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:25:35.666091 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:25:35.671886 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:25:35.673817 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:25:35.674268 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:25:35.674931 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:25:35.676385 dbus-daemon[1483]: [system] SELinux support is enabled Jan 29 16:25:35.676920 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:25:35.679819 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:25:35.684702 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jan 29 16:25:35.684886 extend-filesystems[1485]: Found loop3 Jan 29 16:25:35.684886 extend-filesystems[1485]: Found loop4 Jan 29 16:25:35.684886 extend-filesystems[1485]: Found loop5 Jan 29 16:25:35.689240 extend-filesystems[1485]: Found sr0 Jan 29 16:25:35.689240 extend-filesystems[1485]: Found vda Jan 29 16:25:35.689240 extend-filesystems[1485]: Found vda1 Jan 29 16:25:35.689240 extend-filesystems[1485]: Found vda2 Jan 29 16:25:35.689240 extend-filesystems[1485]: Found vda3 Jan 29 16:25:35.689240 extend-filesystems[1485]: Found usr Jan 29 16:25:35.689240 extend-filesystems[1485]: Found vda4 Jan 29 16:25:35.689240 extend-filesystems[1485]: Found vda6 Jan 29 16:25:35.689240 extend-filesystems[1485]: Found vda7 Jan 29 16:25:35.689240 extend-filesystems[1485]: Found vda9 Jan 29 16:25:35.689240 extend-filesystems[1485]: Checking size of /dev/vda9 Jan 29 16:25:35.687664 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:25:35.688580 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:25:35.698454 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:25:35.713226 jq[1497]: true Jan 29 16:25:35.698848 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 16:25:35.706641 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:25:35.706684 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:25:35.709385 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:25:35.709406 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:25:35.710167 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:25:35.710507 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:25:35.718874 update_engine[1495]: I20250129 16:25:35.715703 1495 main.cc:92] Flatcar Update Engine starting Jan 29 16:25:35.718874 update_engine[1495]: I20250129 16:25:35.717628 1495 update_check_scheduler.cc:74] Next update check in 9m0s Jan 29 16:25:35.723052 extend-filesystems[1485]: Resized partition /dev/vda9 Jan 29 16:25:35.723971 extend-filesystems[1515]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:25:35.727535 jq[1514]: true Jan 29 16:25:35.730817 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 16:25:35.730851 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1406) Jan 29 16:25:35.739322 tar[1501]: linux-amd64/LICENSE Jan 29 16:25:35.739322 tar[1501]: linux-amd64/helm Jan 29 16:25:35.746300 (ntainerd)[1516]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:25:35.751374 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:25:35.757122 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 29 16:25:35.769191 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 16:25:35.793179 systemd-logind[1493]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 16:25:35.794636 systemd-logind[1493]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:25:35.794989 systemd-logind[1493]: New seat seat0. Jan 29 16:25:35.795521 extend-filesystems[1515]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 16:25:35.795521 extend-filesystems[1515]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 16:25:35.795521 extend-filesystems[1515]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 16:25:35.801689 extend-filesystems[1485]: Resized filesystem in /dev/vda9 Jan 29 16:25:35.798169 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:25:35.798520 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:25:35.804363 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:25:35.831878 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:25:35.833803 bash[1537]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:25:35.836157 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:25:35.839249 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 16:25:35.844961 sshd_keygen[1498]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:25:35.872400 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:25:35.883124 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:25:35.893072 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:25:35.893380 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:25:35.903733 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:25:35.915928 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:25:35.923097 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:25:35.925132 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:25:35.926774 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:25:35.965336 containerd[1516]: time="2025-01-29T16:25:35.965232015Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:25:35.994969 containerd[1516]: time="2025-01-29T16:25:35.994904882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:25:35.997094 containerd[1516]: time="2025-01-29T16:25:35.997035368Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:25:35.997094 containerd[1516]: time="2025-01-29T16:25:35.997080913Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:25:35.997094 containerd[1516]: time="2025-01-29T16:25:35.997096182Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 29 16:25:35.997323 containerd[1516]: time="2025-01-29T16:25:35.997297329Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:25:35.997323 containerd[1516]: time="2025-01-29T16:25:35.997320893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:25:35.997416 containerd[1516]: time="2025-01-29T16:25:35.997396104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:25:35.997451 containerd[1516]: time="2025-01-29T16:25:35.997415150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:25:35.997734 containerd[1516]: time="2025-01-29T16:25:35.997703400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:25:35.997734 containerd[1516]: time="2025-01-29T16:25:35.997725622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:25:35.997817 containerd[1516]: time="2025-01-29T16:25:35.997741011Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:25:35.997817 containerd[1516]: time="2025-01-29T16:25:35.997753464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:25:35.997911 containerd[1516]: time="2025-01-29T16:25:35.997885292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:25:35.998209 containerd[1516]: time="2025-01-29T16:25:35.998179704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:25:35.998394 containerd[1516]: time="2025-01-29T16:25:35.998368428Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:25:35.998394 containerd[1516]: time="2025-01-29T16:25:35.998387052Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:25:35.998523 containerd[1516]: time="2025-01-29T16:25:35.998500736Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:25:35.998599 containerd[1516]: time="2025-01-29T16:25:35.998574554Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:25:36.005038 containerd[1516]: time="2025-01-29T16:25:36.004984766Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:25:36.005111 containerd[1516]: time="2025-01-29T16:25:36.005055218Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:25:36.005111 containerd[1516]: time="2025-01-29T16:25:36.005082709Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 29 16:25:36.005111 containerd[1516]: time="2025-01-29T16:25:36.005102366Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:25:36.005201 containerd[1516]: time="2025-01-29T16:25:36.005118707Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:25:36.005310 containerd[1516]: time="2025-01-29T16:25:36.005282524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:25:36.005552 containerd[1516]: time="2025-01-29T16:25:36.005519388Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:25:36.005677 containerd[1516]: time="2025-01-29T16:25:36.005650724Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:25:36.005677 containerd[1516]: time="2025-01-29T16:25:36.005673708Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:25:36.005740 containerd[1516]: time="2025-01-29T16:25:36.005692222Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:25:36.005740 containerd[1516]: time="2025-01-29T16:25:36.005709545Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:25:36.005740 containerd[1516]: time="2025-01-29T16:25:36.005725074Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:25:36.005844 containerd[1516]: time="2025-01-29T16:25:36.005741034Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:25:36.005844 containerd[1516]: time="2025-01-29T16:25:36.005758286Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:25:36.005844 containerd[1516]: time="2025-01-29T16:25:36.005775348Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:25:36.005844 containerd[1516]: time="2025-01-29T16:25:36.005807148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:25:36.005844 containerd[1516]: time="2025-01-29T16:25:36.005824751Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:25:36.005844 containerd[1516]: time="2025-01-29T16:25:36.005839398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:25:36.006010 containerd[1516]: time="2025-01-29T16:25:36.005863594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.006010 containerd[1516]: time="2025-01-29T16:25:36.005885685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.006010 containerd[1516]: time="2025-01-29T16:25:36.005902126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.006010 containerd[1516]: time="2025-01-29T16:25:36.005917755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 29 16:25:36.006010 containerd[1516]: time="2025-01-29T16:25:36.005933114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.006010 containerd[1516]: time="2025-01-29T16:25:36.005948813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.006010 containerd[1516]: time="2025-01-29T16:25:36.005964112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.006010 containerd[1516]: time="2025-01-29T16:25:36.005980012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.006010 containerd[1516]: time="2025-01-29T16:25:36.005997966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.006314 containerd[1516]: time="2025-01-29T16:25:36.006029795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.006314 containerd[1516]: time="2025-01-29T16:25:36.006046677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.006314 containerd[1516]: time="2025-01-29T16:25:36.006062276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.006314 containerd[1516]: time="2025-01-29T16:25:36.006077124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.006314 containerd[1516]: time="2025-01-29T16:25:36.006094777Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:25:36.006314 containerd[1516]: time="2025-01-29T16:25:36.006118221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.006314 containerd[1516]: time="2025-01-29T16:25:36.006136185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.006314 containerd[1516]: time="2025-01-29T16:25:36.006149279Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:25:36.006918 containerd[1516]: time="2025-01-29T16:25:36.006896891Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:25:36.006974 containerd[1516]: time="2025-01-29T16:25:36.006927900Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:25:36.006974 containerd[1516]: time="2025-01-29T16:25:36.006942888Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:25:36.006974 containerd[1516]: time="2025-01-29T16:25:36.006958106Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:25:36.006974 containerd[1516]: time="2025-01-29T16:25:36.006970349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.007107 containerd[1516]: time="2025-01-29T16:25:36.006986129Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 29 16:25:36.007107 containerd[1516]: time="2025-01-29T16:25:36.006999213Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:25:36.007107 containerd[1516]: time="2025-01-29T16:25:36.007022868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 16:25:36.007378 containerd[1516]: time="2025-01-29T16:25:36.007325215Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:25:36.007378 containerd[1516]: time="2025-01-29T16:25:36.007374667Z" level=info msg="Connect containerd service" Jan 29 16:25:36.007590 containerd[1516]: time="2025-01-29T16:25:36.007408371Z" level=info msg="using legacy CRI server" Jan 29 16:25:36.007590 containerd[1516]: time="2025-01-29T16:25:36.007417708Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:25:36.007590 containerd[1516]: time="2025-01-29T16:25:36.007546149Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:25:36.008958 
containerd[1516]: time="2025-01-29T16:25:36.008224090Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:25:36.008958 containerd[1516]: time="2025-01-29T16:25:36.008526778Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:25:36.008958 containerd[1516]: time="2025-01-29T16:25:36.008568617Z" level=info msg="Start subscribing containerd event" Jan 29 16:25:36.008958 containerd[1516]: time="2025-01-29T16:25:36.008624912Z" level=info msg="Start recovering state" Jan 29 16:25:36.008958 containerd[1516]: time="2025-01-29T16:25:36.008582192Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:25:36.008958 containerd[1516]: time="2025-01-29T16:25:36.008697148Z" level=info msg="Start event monitor" Jan 29 16:25:36.008958 containerd[1516]: time="2025-01-29T16:25:36.008709401Z" level=info msg="Start snapshots syncer" Jan 29 16:25:36.008958 containerd[1516]: time="2025-01-29T16:25:36.008721654Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:25:36.008958 containerd[1516]: time="2025-01-29T16:25:36.008731111Z" level=info msg="Start streaming server" Jan 29 16:25:36.008958 containerd[1516]: time="2025-01-29T16:25:36.008810260Z" level=info msg="containerd successfully booted in 0.044709s" Jan 29 16:25:36.012425 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:25:36.015045 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:25:36.017567 systemd[1]: Started sshd@0-10.0.0.146:22-10.0.0.1:51886.service - OpenSSH per-connection server daemon (10.0.0.1:51886). Jan 29 16:25:36.067370 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 51886 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:36.069361 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:36.076910 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:25:36.090449 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:25:36.099699 systemd-logind[1493]: New session 1 of user core. Jan 29 16:25:36.106912 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:25:36.117136 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:25:36.121876 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:25:36.124498 systemd-logind[1493]: New session c1 of user core. Jan 29 16:25:36.208331 tar[1501]: linux-amd64/README.md Jan 29 16:25:36.226860 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:25:36.283090 systemd[1576]: Queued start job for default target default.target. Jan 29 16:25:36.299226 systemd[1576]: Created slice app.slice - User Application Slice. Jan 29 16:25:36.299252 systemd[1576]: Reached target paths.target - Paths. Jan 29 16:25:36.299295 systemd[1576]: Reached target timers.target - Timers. Jan 29 16:25:36.300925 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:25:36.312304 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:25:36.312442 systemd[1576]: Reached target sockets.target - Sockets. 
Jan 29 16:25:36.312488 systemd[1576]: Reached target basic.target - Basic System. Jan 29 16:25:36.312531 systemd[1576]: Reached target default.target - Main User Target. Jan 29 16:25:36.312563 systemd[1576]: Startup finished in 178ms. Jan 29 16:25:36.313342 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:25:36.316386 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:25:36.379210 systemd[1]: Started sshd@1-10.0.0.146:22-10.0.0.1:51892.service - OpenSSH per-connection server daemon (10.0.0.1:51892). Jan 29 16:25:36.422983 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 51892 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:36.424529 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:36.428831 systemd-logind[1493]: New session 2 of user core. Jan 29 16:25:36.437932 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:25:36.492217 sshd[1592]: Connection closed by 10.0.0.1 port 51892 Jan 29 16:25:36.492578 sshd-session[1590]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:36.508658 systemd[1]: sshd@1-10.0.0.146:22-10.0.0.1:51892.service: Deactivated successfully. Jan 29 16:25:36.510454 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:25:36.511776 systemd-logind[1493]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:25:36.520086 systemd[1]: Started sshd@2-10.0.0.146:22-10.0.0.1:51896.service - OpenSSH per-connection server daemon (10.0.0.1:51896). Jan 29 16:25:36.522641 systemd-logind[1493]: Removed session 2. Jan 29 16:25:36.544907 systemd-networkd[1428]: eth0: Gained IPv6LL Jan 29 16:25:36.548128 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:25:36.550133 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:25:36.554921 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 51896 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:36.556745 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:36.560080 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 16:25:36.563104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:36.565496 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:25:36.581714 systemd-logind[1493]: New session 3 of user core. Jan 29 16:25:36.582325 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:25:36.587228 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 16:25:36.587558 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 16:25:36.590727 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:25:36.595067 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:25:36.645010 sshd[1616]: Connection closed by 10.0.0.1 port 51896 Jan 29 16:25:36.645714 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:36.650780 systemd[1]: sshd@2-10.0.0.146:22-10.0.0.1:51896.service: Deactivated successfully. Jan 29 16:25:36.653285 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:25:36.654089 systemd-logind[1493]: Session 3 logged out. Waiting for processes to exit. 
Jan 29 16:25:36.654977 systemd-logind[1493]: Removed session 3. Jan 29 16:25:37.283938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:37.285750 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:25:37.288138 systemd[1]: Startup finished in 737ms (kernel) + 6.517s (initrd) + 4.041s (userspace) = 11.296s. Jan 29 16:25:37.315160 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:37.729747 kubelet[1627]: E0129 16:25:37.729594 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:37.734158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:37.734391 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:37.734858 systemd[1]: kubelet.service: Consumed 1.012s CPU time, 257.6M memory peak. Jan 29 16:25:46.658046 systemd[1]: Started sshd@3-10.0.0.146:22-10.0.0.1:50424.service - OpenSSH per-connection server daemon (10.0.0.1:50424). Jan 29 16:25:46.694969 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 50424 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:46.696874 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:46.701368 systemd-logind[1493]: New session 4 of user core. Jan 29 16:25:46.710925 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:25:46.764500 sshd[1642]: Connection closed by 10.0.0.1 port 50424 Jan 29 16:25:46.764919 sshd-session[1640]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:46.782889 systemd[1]: sshd@3-10.0.0.146:22-10.0.0.1:50424.service: Deactivated successfully. Jan 29 16:25:46.785044 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:25:46.786641 systemd-logind[1493]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:25:46.799032 systemd[1]: Started sshd@4-10.0.0.146:22-10.0.0.1:50430.service - OpenSSH per-connection server daemon (10.0.0.1:50430). Jan 29 16:25:46.799877 systemd-logind[1493]: Removed session 4. Jan 29 16:25:46.832741 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 50430 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:46.834222 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:46.838636 systemd-logind[1493]: New session 5 of user core. Jan 29 16:25:46.847926 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:25:46.898659 sshd[1650]: Connection closed by 10.0.0.1 port 50430 Jan 29 16:25:46.899649 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:46.912839 systemd[1]: sshd@4-10.0.0.146:22-10.0.0.1:50430.service: Deactivated successfully. Jan 29 16:25:46.914763 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:25:46.915539 systemd-logind[1493]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:25:46.926034 systemd[1]: Started sshd@5-10.0.0.146:22-10.0.0.1:50438.service - OpenSSH per-connection server daemon (10.0.0.1:50438). Jan 29 16:25:46.926556 systemd-logind[1493]: Removed session 5. 
Jan 29 16:25:46.959244 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 50438 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:46.960859 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:46.964683 systemd-logind[1493]: New session 6 of user core. Jan 29 16:25:46.974911 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:25:47.028401 sshd[1658]: Connection closed by 10.0.0.1 port 50438 Jan 29 16:25:47.028790 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:47.044883 systemd[1]: sshd@5-10.0.0.146:22-10.0.0.1:50438.service: Deactivated successfully. Jan 29 16:25:47.046733 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:25:47.048122 systemd-logind[1493]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:25:47.049303 systemd[1]: Started sshd@6-10.0.0.146:22-10.0.0.1:50442.service - OpenSSH per-connection server daemon (10.0.0.1:50442). Jan 29 16:25:47.050189 systemd-logind[1493]: Removed session 6. Jan 29 16:25:47.086272 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 50442 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:47.087833 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:47.092540 systemd-logind[1493]: New session 7 of user core. Jan 29 16:25:47.101975 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:25:47.159394 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:25:47.159717 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:47.177851 sudo[1667]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:47.179379 sshd[1666]: Connection closed by 10.0.0.1 port 50442 Jan 29 16:25:47.179883 sshd-session[1663]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:47.191557 systemd[1]: sshd@6-10.0.0.146:22-10.0.0.1:50442.service: Deactivated successfully. Jan 29 16:25:47.193380 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:25:47.195124 systemd-logind[1493]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:25:47.196471 systemd[1]: Started sshd@7-10.0.0.146:22-10.0.0.1:50450.service - OpenSSH per-connection server daemon (10.0.0.1:50450). Jan 29 16:25:47.197201 systemd-logind[1493]: Removed session 7. Jan 29 16:25:47.233884 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 50450 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:47.235238 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:47.239259 systemd-logind[1493]: New session 8 of user core. Jan 29 16:25:47.248913 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 29 16:25:47.301641 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:25:47.301991 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:47.305768 sudo[1677]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:47.312546 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:25:47.312909 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:47.336157 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:25:47.365973 augenrules[1699]: No rules Jan 29 16:25:47.367748 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:25:47.368042 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:25:47.369209 sudo[1676]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:47.370684 sshd[1675]: Connection closed by 10.0.0.1 port 50450 Jan 29 16:25:47.371056 sshd-session[1672]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:47.383586 systemd[1]: sshd@7-10.0.0.146:22-10.0.0.1:50450.service: Deactivated successfully. Jan 29 16:25:47.385479 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:25:47.387200 systemd-logind[1493]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:25:47.399060 systemd[1]: Started sshd@8-10.0.0.146:22-10.0.0.1:34284.service - OpenSSH per-connection server daemon (10.0.0.1:34284). Jan 29 16:25:47.400021 systemd-logind[1493]: Removed session 8. Jan 29 16:25:47.432133 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 34284 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:47.433418 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:47.437469 systemd-logind[1493]: New session 9 of user core. Jan 29 16:25:47.446921 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:25:47.499343 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:25:47.499691 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:47.782262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:25:47.791008 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:25:47.791234 (dockerd)[1730]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:25:47.792145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:47.965206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:25:47.969816 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:48.029130 kubelet[1744]: E0129 16:25:48.029060 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:48.036332 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:48.036585 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:48.037077 systemd[1]: kubelet.service: Consumed 237ms CPU time, 102M memory peak. Jan 29 16:25:48.543608 dockerd[1730]: time="2025-01-29T16:25:48.543499196Z" level=info msg="Starting up" Jan 29 16:25:49.210224 dockerd[1730]: time="2025-01-29T16:25:49.210171995Z" level=info msg="Loading containers: start." Jan 29 16:25:49.386828 kernel: Initializing XFRM netlink socket Jan 29 16:25:49.470409 systemd-networkd[1428]: docker0: Link UP Jan 29 16:25:49.519171 dockerd[1730]: time="2025-01-29T16:25:49.519129701Z" level=info msg="Loading containers: done." Jan 29 16:25:49.532833 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2904148207-merged.mount: Deactivated successfully. Jan 29 16:25:49.533249 dockerd[1730]: time="2025-01-29T16:25:49.533176815Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:25:49.533307 dockerd[1730]: time="2025-01-29T16:25:49.533284086Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:25:49.533412 dockerd[1730]: time="2025-01-29T16:25:49.533394252Z" level=info msg="Daemon has completed initialization" Jan 29 16:25:49.689558 dockerd[1730]: time="2025-01-29T16:25:49.689483806Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:25:49.689755 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:25:50.296723 containerd[1516]: time="2025-01-29T16:25:50.296676800Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 16:25:50.971214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3110553037.mount: Deactivated successfully. 
Jan 29 16:25:52.487184 containerd[1516]: time="2025-01-29T16:25:52.487127336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:52.487934 containerd[1516]: time="2025-01-29T16:25:52.487889986Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 29 16:25:52.489054 containerd[1516]: time="2025-01-29T16:25:52.489008444Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:52.491625 containerd[1516]: time="2025-01-29T16:25:52.491589214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:52.492830 containerd[1516]: time="2025-01-29T16:25:52.492787231Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 2.196070416s" Jan 29 16:25:52.492869 containerd[1516]: time="2025-01-29T16:25:52.492835531Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 29 16:25:52.493901 containerd[1516]: time="2025-01-29T16:25:52.493861776Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 16:25:55.950264 containerd[1516]: time="2025-01-29T16:25:55.950182140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:55.953167 containerd[1516]: time="2025-01-29T16:25:55.953087278Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 29 16:25:55.954420 containerd[1516]: time="2025-01-29T16:25:55.954386695Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:55.957212 containerd[1516]: time="2025-01-29T16:25:55.957172890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:55.958182 containerd[1516]: time="2025-01-29T16:25:55.958138421Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 3.464227944s" Jan 29 16:25:55.958182 containerd[1516]: time="2025-01-29T16:25:55.958176923Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 29 16:25:55.958695 
containerd[1516]: time="2025-01-29T16:25:55.958671060Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 16:25:57.643012 containerd[1516]: time="2025-01-29T16:25:57.642932451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:57.679031 containerd[1516]: time="2025-01-29T16:25:57.678965917Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 29 16:25:57.692143 containerd[1516]: time="2025-01-29T16:25:57.692087254Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:57.708754 containerd[1516]: time="2025-01-29T16:25:57.708585304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:57.709931 containerd[1516]: time="2025-01-29T16:25:57.709621427Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.750917695s" Jan 29 16:25:57.709931 containerd[1516]: time="2025-01-29T16:25:57.709685477Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 29 16:25:57.710448 containerd[1516]: time="2025-01-29T16:25:57.710394607Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 16:25:58.252258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 16:25:58.270016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:58.425508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:58.479995 (kubelet)[2012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:58.542891 kubelet[2012]: E0129 16:25:58.542060 2012 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:58.546712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:58.546941 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:58.547335 systemd[1]: kubelet.service: Consumed 276ms CPU time, 106.2M memory peak. Jan 29 16:26:02.632898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1915089285.mount: Deactivated successfully. 
Jan 29 16:26:03.579989 containerd[1516]: time="2025-01-29T16:26:03.579928779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:03.580714 containerd[1516]: time="2025-01-29T16:26:03.580676942Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 29 16:26:03.581772 containerd[1516]: time="2025-01-29T16:26:03.581737731Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:03.583858 containerd[1516]: time="2025-01-29T16:26:03.583825767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:03.584559 containerd[1516]: time="2025-01-29T16:26:03.584528384Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 5.873969901s" Jan 29 16:26:03.584622 containerd[1516]: time="2025-01-29T16:26:03.584563210Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 29 16:26:03.585059 containerd[1516]: time="2025-01-29T16:26:03.585033702Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 16:26:04.190688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1698858927.mount: Deactivated successfully. 
Jan 29 16:26:05.309762 containerd[1516]: time="2025-01-29T16:26:05.309690279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:05.310925 containerd[1516]: time="2025-01-29T16:26:05.310844604Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 29 16:26:05.312623 containerd[1516]: time="2025-01-29T16:26:05.312574007Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:05.316084 containerd[1516]: time="2025-01-29T16:26:05.316035138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:05.317290 containerd[1516]: time="2025-01-29T16:26:05.317254094Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.732182861s" Jan 29 16:26:05.317290 containerd[1516]: time="2025-01-29T16:26:05.317287717Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 29 16:26:05.318192 containerd[1516]: time="2025-01-29T16:26:05.318157267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 16:26:05.800453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount314905492.mount: Deactivated successfully. 
Jan 29 16:26:05.806326 containerd[1516]: time="2025-01-29T16:26:05.806280913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:05.807136 containerd[1516]: time="2025-01-29T16:26:05.807076816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 16:26:05.808278 containerd[1516]: time="2025-01-29T16:26:05.808245287Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:05.810462 containerd[1516]: time="2025-01-29T16:26:05.810427390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:05.811148 containerd[1516]: time="2025-01-29T16:26:05.811102155Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 492.913599ms" Jan 29 16:26:05.811148 containerd[1516]: time="2025-01-29T16:26:05.811142661Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 16:26:05.811685 containerd[1516]: time="2025-01-29T16:26:05.811663808Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 16:26:06.750937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3728484341.mount: Deactivated successfully. Jan 29 16:26:08.752247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 16:26:08.765001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:08.918589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:08.922615 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:26:09.473100 kubelet[2145]: E0129 16:26:09.472853 2145 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:26:09.477782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:26:09.478029 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:26:09.478463 systemd[1]: kubelet.service: Consumed 218ms CPU time, 105.3M memory peak. 
Jan 29 16:26:10.441950 containerd[1516]: time="2025-01-29T16:26:10.441868227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:10.442768 containerd[1516]: time="2025-01-29T16:26:10.442696048Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 29 16:26:10.444134 containerd[1516]: time="2025-01-29T16:26:10.444092091Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:10.447656 containerd[1516]: time="2025-01-29T16:26:10.447593742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:10.449029 containerd[1516]: time="2025-01-29T16:26:10.448989003Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.637296992s" Jan 29 16:26:10.449081 containerd[1516]: time="2025-01-29T16:26:10.449042535Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 29 16:26:12.761742 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:12.761990 systemd[1]: kubelet.service: Consumed 218ms CPU time, 105.3M memory peak. Jan 29 16:26:12.774265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:12.822262 systemd[1]: Reload requested from client PID 2186 ('systemctl') (unit session-9.scope)... Jan 29 16:26:12.822286 systemd[1]: Reloading... Jan 29 16:26:12.948832 zram_generator::config[2233]: No configuration found. Jan 29 16:26:13.315423 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:26:13.462819 systemd[1]: Reloading finished in 639 ms. Jan 29 16:26:13.530093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:13.539908 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:26:13.541530 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:13.542055 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:26:13.542453 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:13.542513 systemd[1]: kubelet.service: Consumed 181ms CPU time, 91.8M memory peak. Jan 29 16:26:13.563281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:13.749261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:26:13.754062 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:26:13.814384 kubelet[2281]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:26:13.814384 kubelet[2281]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 16:26:13.814384 kubelet[2281]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:26:13.814884 kubelet[2281]: I0129 16:26:13.814442 2281 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:26:14.210961 kubelet[2281]: I0129 16:26:14.210920 2281 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 16:26:14.210961 kubelet[2281]: I0129 16:26:14.210949 2281 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:26:14.211212 kubelet[2281]: I0129 16:26:14.211195 2281 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 16:26:14.237336 kubelet[2281]: E0129 16:26:14.237287 2281 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:14.237863 kubelet[2281]: I0129 16:26:14.237835 2281 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:26:14.244567 kubelet[2281]: E0129 16:26:14.244521 2281 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:26:14.244567 kubelet[2281]: I0129 16:26:14.244550 2281 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:26:14.249598 kubelet[2281]: I0129 16:26:14.249570 2281 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:26:14.284508 kubelet[2281]: I0129 16:26:14.284422 2281 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:26:14.284714 kubelet[2281]: I0129 16:26:14.284502 2281 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:26:14.284868 kubelet[2281]: I0129 16:26:14.284718 2281 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:26:14.284868 kubelet[2281]: I0129 16:26:14.284731 2281 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 16:26:14.285006 kubelet[2281]: I0129 16:26:14.284982 2281 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:26:14.288348 kubelet[2281]: I0129 16:26:14.288317 2281 kubelet.go:446] "Attempting to sync node with API server" Jan 29 16:26:14.288348 kubelet[2281]: I0129 16:26:14.288338 2281 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:26:14.288439 kubelet[2281]: I0129 16:26:14.288363 2281 kubelet.go:352] "Adding apiserver pod source" Jan 29 16:26:14.288439 kubelet[2281]: I0129 16:26:14.288375 2281 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:26:14.291654 kubelet[2281]: I0129 16:26:14.291618 2281 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:26:14.292067 kubelet[2281]: I0129 16:26:14.292036 2281 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:26:14.293354 kubelet[2281]: W0129 16:26:14.293308 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Jan 29 16:26:14.294736 kubelet[2281]: E0129 16:26:14.293367 2281 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:14.294817 kubelet[2281]: W0129 16:26:14.294588 2281 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 16:26:14.295702 kubelet[2281]: W0129 16:26:14.295011 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Jan 29 16:26:14.295702 kubelet[2281]: E0129 16:26:14.295058 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:14.297312 kubelet[2281]: I0129 16:26:14.297050 2281 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 16:26:14.297312 kubelet[2281]: I0129 16:26:14.297091 2281 server.go:1287] "Started kubelet" Jan 29 16:26:14.301973 kubelet[2281]: I0129 16:26:14.301922 2281 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:26:14.302651 kubelet[2281]: I0129 16:26:14.302397 2281 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:26:14.303782 kubelet[2281]: I0129 16:26:14.303064 2281 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:26:14.304715 kubelet[2281]: I0129 16:26:14.304681 2281 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:26:14.306822 kubelet[2281]: E0129 16:26:14.304576 2281 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.146:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.146:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f36903befde6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:26:14.297067116 +0000 UTC m=+0.535566048,LastTimestamp:2025-01-29 16:26:14.297067116 +0000 UTC m=+0.535566048,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 16:26:14.306930 kubelet[2281]: I0129 16:26:14.306840 2281 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:26:14.308965 kubelet[2281]: I0129 16:26:14.307914 2281 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 16:26:14.308965 kubelet[2281]: I0129 16:26:14.308018 2281 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:26:14.308965 kubelet[2281]: I0129 16:26:14.308029 2281 server.go:490] "Adding debug handlers to kubelet server" Jan 29 16:26:14.308965 kubelet[2281]: I0129 
16:26:14.308104 2281 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:26:14.308965 kubelet[2281]: W0129 16:26:14.308434 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Jan 29 16:26:14.308965 kubelet[2281]: E0129 16:26:14.308478 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:14.308965 kubelet[2281]: E0129 16:26:14.308664 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:14.308965 kubelet[2281]: E0129 16:26:14.308730 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="200ms" Jan 29 16:26:14.309587 kubelet[2281]: I0129 16:26:14.309570 2281 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:26:14.310043 kubelet[2281]: I0129 16:26:14.309648 2281 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:26:14.311247 kubelet[2281]: E0129 16:26:14.311222 2281 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:26:14.313231 kubelet[2281]: I0129 16:26:14.313211 2281 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:26:14.325372 kubelet[2281]: I0129 16:26:14.325344 2281 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 16:26:14.325372 kubelet[2281]: I0129 16:26:14.325365 2281 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 16:26:14.325519 kubelet[2281]: I0129 16:26:14.325384 2281 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:26:14.326136 kubelet[2281]: I0129 16:26:14.326089 2281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:26:14.327370 kubelet[2281]: I0129 16:26:14.327340 2281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:26:14.327370 kubelet[2281]: I0129 16:26:14.327363 2281 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 16:26:14.327448 kubelet[2281]: I0129 16:26:14.327384 2281 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
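Every failure above (certificate signing request, event posting, the Service/Node/CSIDriver informers, the lease controller) ends in "dial tcp 10.0.0.146:6443: connect: connection refused": the address is reachable but nothing is listening, most likely because the static kube-apiserver pod has not started yet. A quick probe of the same endpoint, sketched in Go:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same host:port the kubelet keeps failing to reach.
        conn, err := net.DialTimeout("tcp", "10.0.0.146:6443", 3*time.Second)
        if err != nil {
            // "connection refused" = host reachable, no listener yet;
            // a timeout would instead suggest a routing or firewall problem.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }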
Jan 29 16:26:14.327448 kubelet[2281]: I0129 16:26:14.327391 2281 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 16:26:14.327448 kubelet[2281]: E0129 16:26:14.327437 2281 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:26:14.330927 kubelet[2281]: W0129 16:26:14.330888 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Jan 29 16:26:14.331279 kubelet[2281]: E0129 16:26:14.331223 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:14.408898 kubelet[2281]: E0129 16:26:14.408858 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:14.428038 kubelet[2281]: E0129 16:26:14.428012 2281 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 16:26:14.509441 kubelet[2281]: E0129 16:26:14.509402 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:14.509772 kubelet[2281]: E0129 16:26:14.509730 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="400ms" Jan 29 16:26:14.610102 kubelet[2281]: E0129 16:26:14.610055 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:14.628372 kubelet[2281]: E0129 16:26:14.628307 2281 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 16:26:14.710877 kubelet[2281]: E0129 16:26:14.710829 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:14.812033 kubelet[2281]: E0129 16:26:14.811922 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:14.910944 kubelet[2281]: E0129 16:26:14.910887 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="800ms" Jan 29 16:26:14.912965 kubelet[2281]: E0129 16:26:14.912907 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:15.013550 kubelet[2281]: E0129 16:26:15.013496 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:15.028682 kubelet[2281]: E0129 16:26:15.028643 2281 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 16:26:15.114540 kubelet[2281]: E0129 16:26:15.114365 2281 kubelet_node_status.go:467] "Error getting the current 
node from lister" err="node \"localhost\" not found" Jan 29 16:26:15.128089 kubelet[2281]: W0129 16:26:15.128026 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Jan 29 16:26:15.128089 kubelet[2281]: E0129 16:26:15.128079 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:15.214932 kubelet[2281]: E0129 16:26:15.214873 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:15.315494 kubelet[2281]: E0129 16:26:15.315437 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:15.416203 kubelet[2281]: E0129 16:26:15.416050 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:15.418641 kubelet[2281]: W0129 16:26:15.418612 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Jan 29 16:26:15.418698 kubelet[2281]: E0129 16:26:15.418651 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:15.472389 kubelet[2281]: I0129 16:26:15.472345 2281 policy_none.go:49] "None policy: Start" Jan 29 16:26:15.472389 kubelet[2281]: I0129 16:26:15.472386 2281 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 16:26:15.472389 kubelet[2281]: I0129 16:26:15.472407 2281 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:26:15.479903 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:26:15.493660 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 16:26:15.505728 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:26:15.506770 kubelet[2281]: I0129 16:26:15.506745 2281 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:26:15.507012 kubelet[2281]: I0129 16:26:15.506992 2281 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:26:15.507057 kubelet[2281]: I0129 16:26:15.507010 2281 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:26:15.507661 kubelet[2281]: I0129 16:26:15.507228 2281 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:26:15.507886 kubelet[2281]: E0129 16:26:15.507842 2281 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 16:26:15.507886 kubelet[2281]: E0129 16:26:15.507873 2281 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 16:26:15.609597 kubelet[2281]: I0129 16:26:15.609558 2281 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:26:15.609925 kubelet[2281]: E0129 16:26:15.609899 2281 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Jan 29 16:26:15.701826 kubelet[2281]: W0129 16:26:15.701696 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Jan 29 16:26:15.701826 kubelet[2281]: E0129 16:26:15.701753 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:15.711899 kubelet[2281]: E0129 16:26:15.711856 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="1.6s" Jan 29 16:26:15.811119 kubelet[2281]: I0129 16:26:15.811074 2281 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:26:15.811485 kubelet[2281]: E0129 16:26:15.811461 2281 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Jan 29 16:26:15.815911 kubelet[2281]: W0129 16:26:15.815866 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Jan 29 16:26:15.815958 kubelet[2281]: E0129 16:26:15.815917 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:15.838692 systemd[1]: Created slice kubepods-burstable-podd684646993c25a24ec48ffd91414d5ca.slice - libcontainer container kubepods-burstable-podd684646993c25a24ec48ffd91414d5ca.slice. Jan 29 16:26:15.850762 kubelet[2281]: E0129 16:26:15.850716 2281 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:26:15.853137 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. 
Jan 29 16:26:15.862176 kubelet[2281]: E0129 16:26:15.862135 2281 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:26:15.865071 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. Jan 29 16:26:15.866855 kubelet[2281]: E0129 16:26:15.866807 2281 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:26:15.917128 kubelet[2281]: I0129 16:26:15.917074 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:15.917128 kubelet[2281]: I0129 16:26:15.917116 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:15.917505 kubelet[2281]: I0129 16:26:15.917155 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 16:26:15.917505 kubelet[2281]: I0129 16:26:15.917179 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d684646993c25a24ec48ffd91414d5ca-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d684646993c25a24ec48ffd91414d5ca\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:26:15.917505 kubelet[2281]: I0129 16:26:15.917246 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:15.917505 kubelet[2281]: I0129 16:26:15.917299 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:15.917505 kubelet[2281]: I0129 16:26:15.917325 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d684646993c25a24ec48ffd91414d5ca-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d684646993c25a24ec48ffd91414d5ca\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:26:15.917618 kubelet[2281]: I0129 16:26:15.917347 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d684646993c25a24ec48ffd91414d5ca-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d684646993c25a24ec48ffd91414d5ca\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:26:15.917618 kubelet[2281]: I0129 16:26:15.917371 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:16.152680 containerd[1516]: time="2025-01-29T16:26:16.152620550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d684646993c25a24ec48ffd91414d5ca,Namespace:kube-system,Attempt:0,}" Jan 29 16:26:16.163189 containerd[1516]: time="2025-01-29T16:26:16.163142575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 29 16:26:16.167971 containerd[1516]: time="2025-01-29T16:26:16.167932860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 29 16:26:16.213891 kubelet[2281]: I0129 16:26:16.213847 2281 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:26:16.217800 kubelet[2281]: E0129 16:26:16.217758 2281 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Jan 29 16:26:16.410527 kubelet[2281]: E0129 16:26:16.410393 2281 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:16.735779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3321350909.mount: Deactivated successfully. 
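The reconciler entries above track hostPath volumes (ca-certs, k8s-certs, kubeconfig, flexvolume-dir, usr-share-ca-certificates) declared by the static control-plane pod manifests under /etc/kubernetes/manifests. Expressed with client-go types, one such volume looks roughly like the sketch below; the concrete host path is an assumed kubeadm default, not a value taken from this log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        dirOrCreate := corev1.HostPathDirectoryOrCreate
        // One of the volumes named in the reconciler lines; path assumed.
        vol := corev1.Volume{
            Name: "k8s-certs",
            VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{
                    Path: "/etc/kubernetes/pki",
                    Type: &dirOrCreate,
                },
            },
        }
        fmt.Printf("%s -> %s\n", vol.Name, vol.VolumeSource.HostPath.Path)
    }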
Jan 29 16:26:16.745235 containerd[1516]: time="2025-01-29T16:26:16.745175122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:26:16.748173 containerd[1516]: time="2025-01-29T16:26:16.748123575Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 16:26:16.749235 containerd[1516]: time="2025-01-29T16:26:16.749188135Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:26:16.751119 containerd[1516]: time="2025-01-29T16:26:16.751086545Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:26:16.751931 containerd[1516]: time="2025-01-29T16:26:16.751872153Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:26:16.752921 containerd[1516]: time="2025-01-29T16:26:16.752851300Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:26:16.753611 containerd[1516]: time="2025-01-29T16:26:16.753568819Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:26:16.755539 containerd[1516]: time="2025-01-29T16:26:16.755472139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:26:16.757053 containerd[1516]: time="2025-01-29T16:26:16.757020231Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 589.008511ms" Jan 29 16:26:16.758515 containerd[1516]: time="2025-01-29T16:26:16.758486317Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 605.749315ms" Jan 29 16:26:16.759989 containerd[1516]: time="2025-01-29T16:26:16.759948396Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 596.742248ms" Jan 29 16:26:16.934707 containerd[1516]: time="2025-01-29T16:26:16.932822921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:16.934707 containerd[1516]: time="2025-01-29T16:26:16.934375481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:16.934707 containerd[1516]: time="2025-01-29T16:26:16.934405879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:16.934707 containerd[1516]: time="2025-01-29T16:26:16.934542180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:17.010247 systemd[1]: Started cri-containerd-e78799f998e107d8ecba3d7f437d00a7717343e77d174a961133f12a24ca5388.scope - libcontainer container e78799f998e107d8ecba3d7f437d00a7717343e77d174a961133f12a24ca5388. Jan 29 16:26:17.011826 containerd[1516]: time="2025-01-29T16:26:17.011482284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:17.011826 containerd[1516]: time="2025-01-29T16:26:17.011575773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:17.011826 containerd[1516]: time="2025-01-29T16:26:17.011592134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:17.011826 containerd[1516]: time="2025-01-29T16:26:17.011686965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:17.015371 containerd[1516]: time="2025-01-29T16:26:17.015254051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:17.015446 containerd[1516]: time="2025-01-29T16:26:17.015381112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:17.015446 containerd[1516]: time="2025-01-29T16:26:17.015419907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:17.018581 containerd[1516]: time="2025-01-29T16:26:17.015569060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:17.020898 kubelet[2281]: I0129 16:26:17.020427 2281 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:26:17.020898 kubelet[2281]: E0129 16:26:17.020847 2281 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Jan 29 16:26:17.041939 systemd[1]: Started cri-containerd-218c968920f78cdf95ec9e563d212fd6b8ace3ab810cf9d61dd796ad15e0151f.scope - libcontainer container 218c968920f78cdf95ec9e563d212fd6b8ace3ab810cf9d61dd796ad15e0151f. Jan 29 16:26:17.045305 systemd[1]: Started cri-containerd-0bd9caa75e0ed680d30db07a40263fd207aac9f8db02525638f5a16f2de6db07.scope - libcontainer container 0bd9caa75e0ed680d30db07a40263fd207aac9f8db02525638f5a16f2de6db07. 
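systemd has just started one cri-containerd-<id>.scope per pod sandbox, and the RunPodSandbox calls below return the same ids. Those sandboxes can be inspected over the CRI v1 API the kubelet itself uses; a rough sketch with the generated gRPC client, assuming containerd's default socket path:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd serves the CRI services on its main socket (assumed default path).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, sb := range resp.Items {
            // Ids here match the cri-containerd-<id>.scope units started by systemd.
            fmt.Println(sb.Id, sb.Metadata.Namespace+"/"+sb.Metadata.Name, sb.State)
        }
    }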
Jan 29 16:26:17.062007 containerd[1516]: time="2025-01-29T16:26:17.061886727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e78799f998e107d8ecba3d7f437d00a7717343e77d174a961133f12a24ca5388\"" Jan 29 16:26:17.065546 containerd[1516]: time="2025-01-29T16:26:17.065439946Z" level=info msg="CreateContainer within sandbox \"e78799f998e107d8ecba3d7f437d00a7717343e77d174a961133f12a24ca5388\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:26:17.166398 containerd[1516]: time="2025-01-29T16:26:17.165912561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d684646993c25a24ec48ffd91414d5ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"218c968920f78cdf95ec9e563d212fd6b8ace3ab810cf9d61dd796ad15e0151f\"" Jan 29 16:26:17.168275 containerd[1516]: time="2025-01-29T16:26:17.168199938Z" level=info msg="CreateContainer within sandbox \"218c968920f78cdf95ec9e563d212fd6b8ace3ab810cf9d61dd796ad15e0151f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:26:17.169005 containerd[1516]: time="2025-01-29T16:26:17.168975256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bd9caa75e0ed680d30db07a40263fd207aac9f8db02525638f5a16f2de6db07\"" Jan 29 16:26:17.170699 containerd[1516]: time="2025-01-29T16:26:17.170658272Z" level=info msg="CreateContainer within sandbox \"0bd9caa75e0ed680d30db07a40263fd207aac9f8db02525638f5a16f2de6db07\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:26:17.312892 kubelet[2281]: E0129 16:26:17.312724 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="3.2s" Jan 29 16:26:17.333286 kubelet[2281]: W0129 16:26:17.333209 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Jan 29 16:26:17.333286 kubelet[2281]: E0129 16:26:17.333278 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:17.554866 kubelet[2281]: E0129 16:26:17.554706 2281 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.146:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.146:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f36903befde6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:26:14.297067116 +0000 UTC m=+0.535566048,LastTimestamp:2025-01-29 16:26:14.297067116 +0000 UTC m=+0.535566048,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 16:26:17.649384 containerd[1516]: time="2025-01-29T16:26:17.649250471Z" level=info msg="CreateContainer within sandbox \"e78799f998e107d8ecba3d7f437d00a7717343e77d174a961133f12a24ca5388\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0a4d0384c1cdce0cf390173902b065cc8b49a755b5ce96a07c3d395dc1415a51\"" Jan 29 16:26:17.649997 containerd[1516]: time="2025-01-29T16:26:17.649963188Z" level=info msg="StartContainer for \"0a4d0384c1cdce0cf390173902b065cc8b49a755b5ce96a07c3d395dc1415a51\"" Jan 29 16:26:17.658572 containerd[1516]: time="2025-01-29T16:26:17.658540108Z" level=info msg="CreateContainer within sandbox \"218c968920f78cdf95ec9e563d212fd6b8ace3ab810cf9d61dd796ad15e0151f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"064891d360819cc4bf755afac9b8dc5fd6fff4409f30655b3bb0f5f3463840ef\"" Jan 29 16:26:17.658992 containerd[1516]: time="2025-01-29T16:26:17.658968374Z" level=info msg="StartContainer for \"064891d360819cc4bf755afac9b8dc5fd6fff4409f30655b3bb0f5f3463840ef\"" Jan 29 16:26:17.659828 containerd[1516]: time="2025-01-29T16:26:17.659765352Z" level=info msg="CreateContainer within sandbox \"0bd9caa75e0ed680d30db07a40263fd207aac9f8db02525638f5a16f2de6db07\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f6678d801c48d7b64371e30542f11185a86e714a418bbc97883c77f8482ad57e\"" Jan 29 16:26:17.660397 containerd[1516]: time="2025-01-29T16:26:17.660208917Z" level=info msg="StartContainer for \"f6678d801c48d7b64371e30542f11185a86e714a418bbc97883c77f8482ad57e\"" Jan 29 16:26:17.683138 systemd[1]: Started cri-containerd-0a4d0384c1cdce0cf390173902b065cc8b49a755b5ce96a07c3d395dc1415a51.scope - libcontainer container 0a4d0384c1cdce0cf390173902b065cc8b49a755b5ce96a07c3d395dc1415a51. Jan 29 16:26:17.696931 systemd[1]: Started cri-containerd-064891d360819cc4bf755afac9b8dc5fd6fff4409f30655b3bb0f5f3463840ef.scope - libcontainer container 064891d360819cc4bf755afac9b8dc5fd6fff4409f30655b3bb0f5f3463840ef. Jan 29 16:26:17.700587 systemd[1]: Started cri-containerd-f6678d801c48d7b64371e30542f11185a86e714a418bbc97883c77f8482ad57e.scope - libcontainer container f6678d801c48d7b64371e30542f11185a86e714a418bbc97883c77f8482ad57e. 
Jan 29 16:26:17.741705 containerd[1516]: time="2025-01-29T16:26:17.741658652Z" level=info msg="StartContainer for \"0a4d0384c1cdce0cf390173902b065cc8b49a755b5ce96a07c3d395dc1415a51\" returns successfully" Jan 29 16:26:17.756611 containerd[1516]: time="2025-01-29T16:26:17.756559348Z" level=info msg="StartContainer for \"f6678d801c48d7b64371e30542f11185a86e714a418bbc97883c77f8482ad57e\" returns successfully" Jan 29 16:26:17.756816 kubelet[2281]: W0129 16:26:17.756712 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Jan 29 16:26:17.756866 kubelet[2281]: E0129 16:26:17.756835 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:17.759923 containerd[1516]: time="2025-01-29T16:26:17.759861770Z" level=info msg="StartContainer for \"064891d360819cc4bf755afac9b8dc5fd6fff4409f30655b3bb0f5f3463840ef\" returns successfully" Jan 29 16:26:18.343668 kubelet[2281]: E0129 16:26:18.342839 2281 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:26:18.345506 kubelet[2281]: E0129 16:26:18.344835 2281 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:26:18.348098 kubelet[2281]: E0129 16:26:18.347946 2281 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:26:18.622318 kubelet[2281]: I0129 16:26:18.622159 2281 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:26:19.348591 kubelet[2281]: E0129 16:26:19.348555 2281 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:26:19.349130 kubelet[2281]: E0129 16:26:19.348700 2281 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:26:19.349209 kubelet[2281]: E0129 16:26:19.349172 2281 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:26:19.392113 kubelet[2281]: I0129 16:26:19.392042 2281 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 16:26:19.392113 kubelet[2281]: E0129 16:26:19.392101 2281 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 16:26:19.395608 kubelet[2281]: E0129 16:26:19.395569 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:19.496290 kubelet[2281]: E0129 16:26:19.496230 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:19.596876 kubelet[2281]: E0129 16:26:19.596840 2281 kubelet_node_status.go:467] "Error getting the 
current node from lister" err="node \"localhost\" not found" Jan 29 16:26:19.697410 kubelet[2281]: E0129 16:26:19.697320 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:19.797426 kubelet[2281]: E0129 16:26:19.797393 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:19.898055 kubelet[2281]: E0129 16:26:19.897995 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:19.998694 kubelet[2281]: E0129 16:26:19.998579 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:20.099398 kubelet[2281]: E0129 16:26:20.099343 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:20.200417 kubelet[2281]: E0129 16:26:20.200358 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:20.300749 kubelet[2281]: E0129 16:26:20.300705 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:20.350648 kubelet[2281]: E0129 16:26:20.350599 2281 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:26:20.350648 kubelet[2281]: E0129 16:26:20.350622 2281 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:26:20.401895 kubelet[2281]: E0129 16:26:20.401834 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:20.548514 kubelet[2281]: E0129 16:26:20.548451 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:20.649698 kubelet[2281]: E0129 16:26:20.649561 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:20.750199 kubelet[2281]: E0129 16:26:20.750154 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:20.850863 kubelet[2281]: E0129 16:26:20.850810 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:20.951900 kubelet[2281]: E0129 16:26:20.951726 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:20.992918 update_engine[1495]: I20250129 16:26:20.992844 1495 update_attempter.cc:509] Updating boot flags... 
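The "Successfully registered node" line above creates the Node object that the lister errors keep complaining about; the errors persist only until the kubelet's node informer syncs it. Checking for the object from outside is a one-call sketch with client-go (admin kubeconfig path assumed, not taken from the log):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed admin kubeconfig path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // The kubelet's informer lists/watches Nodes with
        // fieldSelector=metadata.name=localhost; a plain Get answers the same
        // question: does the Node object exist yet?
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "localhost", metav1.GetOptions{})
        if err != nil {
            log.Fatalf("node not registered yet: %v", err)
        }
        fmt.Println(node.Name, "created", node.CreationTimestamp.Time)
    }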
Jan 29 16:26:21.052139 kubelet[2281]: E0129 16:26:21.052091 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:21.143836 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2562) Jan 29 16:26:21.156712 kubelet[2281]: E0129 16:26:21.156652 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:21.258335 kubelet[2281]: E0129 16:26:21.258264 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:21.356244 kubelet[2281]: E0129 16:26:21.356202 2281 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:26:21.358914 kubelet[2281]: E0129 16:26:21.358894 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:21.459303 kubelet[2281]: E0129 16:26:21.459241 2281 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:21.609143 kubelet[2281]: I0129 16:26:21.608987 2281 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 16:26:21.619729 kubelet[2281]: I0129 16:26:21.619681 2281 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:21.624178 kubelet[2281]: I0129 16:26:21.624154 2281 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 16:26:21.693803 systemd[1]: Reload requested from client PID 2569 ('systemctl') (unit session-9.scope)... Jan 29 16:26:21.693820 systemd[1]: Reloading... Jan 29 16:26:21.785829 zram_generator::config[2614]: No configuration found. Jan 29 16:26:21.908081 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:26:22.025811 systemd[1]: Reloading finished in 331 ms. Jan 29 16:26:22.042205 kubelet[2281]: I0129 16:26:22.042152 2281 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 16:26:22.048178 kubelet[2281]: E0129 16:26:22.048152 2281 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 16:26:22.049772 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:22.065175 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:26:22.065455 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:22.065498 systemd[1]: kubelet.service: Consumed 1.129s CPU time, 128M memory peak. Jan 29 16:26:22.076050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:22.239950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:22.244004 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:26:22.284891 kubelet[2658]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:26:22.284891 kubelet[2658]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 16:26:22.284891 kubelet[2658]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:26:22.284891 kubelet[2658]: I0129 16:26:22.284842 2658 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:26:22.292628 kubelet[2658]: I0129 16:26:22.292526 2658 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 16:26:22.292628 kubelet[2658]: I0129 16:26:22.292557 2658 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:26:22.292941 kubelet[2658]: I0129 16:26:22.292876 2658 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 16:26:22.294023 kubelet[2658]: I0129 16:26:22.293998 2658 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 16:26:22.296218 kubelet[2658]: I0129 16:26:22.296115 2658 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:26:22.299825 kubelet[2658]: E0129 16:26:22.299772 2658 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:26:22.299958 kubelet[2658]: I0129 16:26:22.299946 2658 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:26:22.304739 kubelet[2658]: I0129 16:26:22.304719 2658 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:26:22.304976 kubelet[2658]: I0129 16:26:22.304949 2658 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:26:22.305111 kubelet[2658]: I0129 16:26:22.304976 2658 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:26:22.305188 kubelet[2658]: I0129 16:26:22.305117 2658 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:26:22.305188 kubelet[2658]: I0129 16:26:22.305125 2658 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 16:26:22.305188 kubelet[2658]: I0129 16:26:22.305161 2658 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:26:22.305323 kubelet[2658]: I0129 16:26:22.305312 2658 kubelet.go:446] "Attempting to sync node with API server" Jan 29 16:26:22.305371 kubelet[2658]: I0129 16:26:22.305326 2658 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:26:22.305371 kubelet[2658]: I0129 16:26:22.305341 2658 kubelet.go:352] "Adding apiserver pod source" Jan 29 16:26:22.305371 kubelet[2658]: I0129 16:26:22.305350 2658 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:26:22.306602 kubelet[2658]: I0129 16:26:22.306538 2658 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:26:22.306928 kubelet[2658]: I0129 16:26:22.306903 2658 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:26:22.307371 kubelet[2658]: I0129 16:26:22.307353 2658 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 16:26:22.307433 kubelet[2658]: I0129 16:26:22.307381 2658 server.go:1287] "Started kubelet" Jan 29 16:26:22.309831 kubelet[2658]: I0129 16:26:22.308100 2658 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:26:22.309831 kubelet[2658]: I0129 
16:26:22.308374 2658 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:26:22.309831 kubelet[2658]: I0129 16:26:22.308414 2658 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:26:22.309831 kubelet[2658]: I0129 16:26:22.308938 2658 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:26:22.309831 kubelet[2658]: I0129 16:26:22.309223 2658 server.go:490] "Adding debug handlers to kubelet server" Jan 29 16:26:22.315587 kubelet[2658]: I0129 16:26:22.311763 2658 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:26:22.317461 kubelet[2658]: I0129 16:26:22.317443 2658 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 16:26:22.317740 kubelet[2658]: E0129 16:26:22.317695 2658 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:26:22.318522 kubelet[2658]: I0129 16:26:22.318481 2658 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:26:22.318851 kubelet[2658]: I0129 16:26:22.318838 2658 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:26:22.324193 kubelet[2658]: I0129 16:26:22.323684 2658 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:26:22.324193 kubelet[2658]: I0129 16:26:22.323785 2658 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:26:22.325168 kubelet[2658]: I0129 16:26:22.325088 2658 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:26:22.325168 kubelet[2658]: E0129 16:26:22.325127 2658 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:26:22.328753 kubelet[2658]: I0129 16:26:22.328663 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:26:22.331103 kubelet[2658]: I0129 16:26:22.330602 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:26:22.331103 kubelet[2658]: I0129 16:26:22.330641 2658 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 16:26:22.331103 kubelet[2658]: I0129 16:26:22.330672 2658 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
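The restarted kubelet again serves the podresources API on unix:/var/lib/kubelet/pod-resources/kubelet.sock (the endpoint logged above). A sketch of querying it with the generated gRPC client; root access to the socket is assumed:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
    )

    func main() {
        // Socket path comes from the kubelet log line above.
        conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := podresourcesv1.NewPodResourcesListerClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        resp, err := client.List(ctx, &podresourcesv1.ListPodResourcesRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, pod := range resp.PodResources {
            fmt.Println(pod.Namespace+"/"+pod.Name, len(pod.Containers), "containers")
        }
    }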
Jan 29 16:26:22.331103 kubelet[2658]: I0129 16:26:22.330684 2658 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 16:26:22.331103 kubelet[2658]: E0129 16:26:22.330739 2658 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:26:22.362257 kubelet[2658]: I0129 16:26:22.362227 2658 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 16:26:22.362425 kubelet[2658]: I0129 16:26:22.362413 2658 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 16:26:22.362504 kubelet[2658]: I0129 16:26:22.362495 2658 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:26:22.362726 kubelet[2658]: I0129 16:26:22.362712 2658 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 16:26:22.362790 kubelet[2658]: I0129 16:26:22.362768 2658 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 16:26:22.362872 kubelet[2658]: I0129 16:26:22.362862 2658 policy_none.go:49] "None policy: Start" Jan 29 16:26:22.362916 kubelet[2658]: I0129 16:26:22.362908 2658 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 16:26:22.362965 kubelet[2658]: I0129 16:26:22.362957 2658 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:26:22.363112 kubelet[2658]: I0129 16:26:22.363101 2658 state_mem.go:75] "Updated machine memory state" Jan 29 16:26:22.366915 kubelet[2658]: I0129 16:26:22.366845 2658 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:26:22.367034 kubelet[2658]: I0129 16:26:22.367018 2658 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:26:22.367086 kubelet[2658]: I0129 16:26:22.367033 2658 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:26:22.367723 kubelet[2658]: I0129 16:26:22.367344 2658 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:26:22.367912 kubelet[2658]: E0129 16:26:22.367896 2658 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 16:26:22.432021 kubelet[2658]: I0129 16:26:22.431965 2658 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 16:26:22.432159 kubelet[2658]: I0129 16:26:22.431974 2658 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 16:26:22.432211 kubelet[2658]: I0129 16:26:22.431980 2658 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:22.438402 kubelet[2658]: E0129 16:26:22.438293 2658 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:22.438523 kubelet[2658]: E0129 16:26:22.438437 2658 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 16:26:22.438909 kubelet[2658]: E0129 16:26:22.438885 2658 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 16:26:22.469240 kubelet[2658]: I0129 16:26:22.469202 2658 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:26:22.475615 kubelet[2658]: I0129 16:26:22.475576 2658 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Jan 29 16:26:22.475766 kubelet[2658]: I0129 16:26:22.475666 2658 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 16:26:22.520162 kubelet[2658]: I0129 16:26:22.520136 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 16:26:22.520286 kubelet[2658]: I0129 16:26:22.520165 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:22.520286 kubelet[2658]: I0129 16:26:22.520190 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:22.520286 kubelet[2658]: I0129 16:26:22.520207 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d684646993c25a24ec48ffd91414d5ca-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d684646993c25a24ec48ffd91414d5ca\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:26:22.520286 kubelet[2658]: I0129 16:26:22.520221 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d684646993c25a24ec48ffd91414d5ca-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d684646993c25a24ec48ffd91414d5ca\") " 
pod="kube-system/kube-apiserver-localhost" Jan 29 16:26:22.520286 kubelet[2658]: I0129 16:26:22.520236 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d684646993c25a24ec48ffd91414d5ca-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d684646993c25a24ec48ffd91414d5ca\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:26:22.520474 kubelet[2658]: I0129 16:26:22.520251 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:22.520474 kubelet[2658]: I0129 16:26:22.520276 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:22.520474 kubelet[2658]: I0129 16:26:22.520301 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:23.306341 kubelet[2658]: I0129 16:26:23.306277 2658 apiserver.go:52] "Watching apiserver" Jan 29 16:26:23.319196 kubelet[2658]: I0129 16:26:23.319162 2658 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:26:23.341695 kubelet[2658]: I0129 16:26:23.341661 2658 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:23.342444 kubelet[2658]: I0129 16:26:23.341837 2658 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 16:26:23.342444 kubelet[2658]: I0129 16:26:23.341973 2658 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 16:26:23.568402 kubelet[2658]: E0129 16:26:23.568262 2658 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 16:26:23.568402 kubelet[2658]: E0129 16:26:23.568314 2658 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:26:23.569225 kubelet[2658]: E0129 16:26:23.568568 2658 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 16:26:23.587233 kubelet[2658]: I0129 16:26:23.587153 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.587104696 podStartE2EDuration="2.587104696s" podCreationTimestamp="2025-01-29 16:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:26:23.587083416 +0000 UTC m=+1.338601548" 
watchObservedRunningTime="2025-01-29 16:26:23.587104696 +0000 UTC m=+1.338622838" Jan 29 16:26:23.603991 kubelet[2658]: I0129 16:26:23.603552 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.603531012 podStartE2EDuration="2.603531012s" podCreationTimestamp="2025-01-29 16:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:26:23.596224102 +0000 UTC m=+1.347742234" watchObservedRunningTime="2025-01-29 16:26:23.603531012 +0000 UTC m=+1.355049144" Jan 29 16:26:23.611102 kubelet[2658]: I0129 16:26:23.610406 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.610391296 podStartE2EDuration="2.610391296s" podCreationTimestamp="2025-01-29 16:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:26:23.603607778 +0000 UTC m=+1.355125910" watchObservedRunningTime="2025-01-29 16:26:23.610391296 +0000 UTC m=+1.361909428" Jan 29 16:26:28.017867 kubelet[2658]: I0129 16:26:28.017826 2658 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:26:28.019144 kubelet[2658]: I0129 16:26:28.018577 2658 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:26:28.019196 containerd[1516]: time="2025-01-29T16:26:28.018321595Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:26:28.086612 sudo[1711]: pam_unix(sudo:session): session closed for user root Jan 29 16:26:28.088274 sshd[1710]: Connection closed by 10.0.0.1 port 34284 Jan 29 16:26:28.088846 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:28.092971 systemd[1]: sshd@8-10.0.0.146:22-10.0.0.1:34284.service: Deactivated successfully. Jan 29 16:26:28.095117 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:26:28.095342 systemd[1]: session-9.scope: Consumed 4.696s CPU time, 211.2M memory peak. Jan 29 16:26:28.096630 systemd-logind[1493]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:26:28.097613 systemd-logind[1493]: Removed session 9. Jan 29 16:26:28.952482 systemd[1]: Created slice kubepods-besteffort-pod01fc2fa7_647d_4fa3_bfa1_9dd0a90a9675.slice - libcontainer container kubepods-besteffort-pod01fc2fa7_647d_4fa3_bfa1_9dd0a90a9675.slice. 
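[Editor's note] The pod_startup_latency_tracker entries a few lines above report podStartE2EDuration as the gap between podCreationTimestamp and the observed running time. A minimal Go sketch — not kubelet code, just the two timestamps copied from the kube-apiserver-localhost entry — that reproduces the reported 2.587104696s:

```go
// Minimal sketch: subtract podCreationTimestamp from watchObservedRunningTime,
// both copied verbatim from the pod_startup_latency_tracker log entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching time.Time.String() as it appears in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-01-29 16:26:21 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-01-29 16:26:23.587104696 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 2.587104696s, matching podStartE2EDuration for kube-apiserver-localhost.
	fmt.Println(observed.Sub(created))
}
```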
Jan 29 16:26:28.964952 kubelet[2658]: I0129 16:26:28.964898 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01fc2fa7-647d-4fa3-bfa1-9dd0a90a9675-xtables-lock\") pod \"kube-proxy-d5tpx\" (UID: \"01fc2fa7-647d-4fa3-bfa1-9dd0a90a9675\") " pod="kube-system/kube-proxy-d5tpx" Jan 29 16:26:28.965092 kubelet[2658]: I0129 16:26:28.964958 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01fc2fa7-647d-4fa3-bfa1-9dd0a90a9675-lib-modules\") pod \"kube-proxy-d5tpx\" (UID: \"01fc2fa7-647d-4fa3-bfa1-9dd0a90a9675\") " pod="kube-system/kube-proxy-d5tpx" Jan 29 16:26:28.965092 kubelet[2658]: I0129 16:26:28.964985 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/01fc2fa7-647d-4fa3-bfa1-9dd0a90a9675-kube-proxy\") pod \"kube-proxy-d5tpx\" (UID: \"01fc2fa7-647d-4fa3-bfa1-9dd0a90a9675\") " pod="kube-system/kube-proxy-d5tpx" Jan 29 16:26:28.965092 kubelet[2658]: I0129 16:26:28.965006 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf8h4\" (UniqueName: \"kubernetes.io/projected/01fc2fa7-647d-4fa3-bfa1-9dd0a90a9675-kube-api-access-vf8h4\") pod \"kube-proxy-d5tpx\" (UID: \"01fc2fa7-647d-4fa3-bfa1-9dd0a90a9675\") " pod="kube-system/kube-proxy-d5tpx" Jan 29 16:26:29.156177 systemd[1]: Created slice kubepods-besteffort-pod4e279ec4_8c82_47b9_b813_056a3feb7ea2.slice - libcontainer container kubepods-besteffort-pod4e279ec4_8c82_47b9_b813_056a3feb7ea2.slice. Jan 29 16:26:29.166588 kubelet[2658]: I0129 16:26:29.166557 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4e279ec4-8c82-47b9-b813-056a3feb7ea2-var-lib-calico\") pod \"tigera-operator-7d68577dc5-x4p6j\" (UID: \"4e279ec4-8c82-47b9-b813-056a3feb7ea2\") " pod="tigera-operator/tigera-operator-7d68577dc5-x4p6j" Jan 29 16:26:29.166588 kubelet[2658]: I0129 16:26:29.166595 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9r4m\" (UniqueName: \"kubernetes.io/projected/4e279ec4-8c82-47b9-b813-056a3feb7ea2-kube-api-access-r9r4m\") pod \"tigera-operator-7d68577dc5-x4p6j\" (UID: \"4e279ec4-8c82-47b9-b813-056a3feb7ea2\") " pod="tigera-operator/tigera-operator-7d68577dc5-x4p6j" Jan 29 16:26:29.263001 containerd[1516]: time="2025-01-29T16:26:29.262960664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d5tpx,Uid:01fc2fa7-647d-4fa3-bfa1-9dd0a90a9675,Namespace:kube-system,Attempt:0,}" Jan 29 16:26:29.286104 containerd[1516]: time="2025-01-29T16:26:29.285369677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:29.286104 containerd[1516]: time="2025-01-29T16:26:29.286061744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:29.286104 containerd[1516]: time="2025-01-29T16:26:29.286076151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:29.286280 containerd[1516]: time="2025-01-29T16:26:29.286169327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:29.312949 systemd[1]: Started cri-containerd-82a238d31ee272e8d8aa03b28246a3278b0bab3a6eb932b65841387ad9b3c320.scope - libcontainer container 82a238d31ee272e8d8aa03b28246a3278b0bab3a6eb932b65841387ad9b3c320. Jan 29 16:26:29.334629 containerd[1516]: time="2025-01-29T16:26:29.334574927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d5tpx,Uid:01fc2fa7-647d-4fa3-bfa1-9dd0a90a9675,Namespace:kube-system,Attempt:0,} returns sandbox id \"82a238d31ee272e8d8aa03b28246a3278b0bab3a6eb932b65841387ad9b3c320\"" Jan 29 16:26:29.339652 containerd[1516]: time="2025-01-29T16:26:29.339453873Z" level=info msg="CreateContainer within sandbox \"82a238d31ee272e8d8aa03b28246a3278b0bab3a6eb932b65841387ad9b3c320\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:26:29.357062 containerd[1516]: time="2025-01-29T16:26:29.357009056Z" level=info msg="CreateContainer within sandbox \"82a238d31ee272e8d8aa03b28246a3278b0bab3a6eb932b65841387ad9b3c320\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"60d82bcc92dc03715d102229e5eecee877c843628873a15a256ef08c8f1c08a6\"" Jan 29 16:26:29.357502 containerd[1516]: time="2025-01-29T16:26:29.357459557Z" level=info msg="StartContainer for \"60d82bcc92dc03715d102229e5eecee877c843628873a15a256ef08c8f1c08a6\"" Jan 29 16:26:29.384930 systemd[1]: Started cri-containerd-60d82bcc92dc03715d102229e5eecee877c843628873a15a256ef08c8f1c08a6.scope - libcontainer container 60d82bcc92dc03715d102229e5eecee877c843628873a15a256ef08c8f1c08a6. Jan 29 16:26:29.415517 containerd[1516]: time="2025-01-29T16:26:29.415466554Z" level=info msg="StartContainer for \"60d82bcc92dc03715d102229e5eecee877c843628873a15a256ef08c8f1c08a6\" returns successfully" Jan 29 16:26:29.459151 containerd[1516]: time="2025-01-29T16:26:29.459111542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-x4p6j,Uid:4e279ec4-8c82-47b9-b813-056a3feb7ea2,Namespace:tigera-operator,Attempt:0,}" Jan 29 16:26:29.485725 containerd[1516]: time="2025-01-29T16:26:29.484131686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:29.485725 containerd[1516]: time="2025-01-29T16:26:29.484204714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:29.485725 containerd[1516]: time="2025-01-29T16:26:29.484217017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:29.485725 containerd[1516]: time="2025-01-29T16:26:29.484302949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:29.500923 systemd[1]: Started cri-containerd-d552a530cc4d7b8b21d45f6ec02816ce6de5ce9ab5db03b415fb8864975d0fba.scope - libcontainer container d552a530cc4d7b8b21d45f6ec02816ce6de5ce9ab5db03b415fb8864975d0fba. 
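[Editor's note] The entries above trace the CRI call sequence containerd performs for the kube-proxy pod: RunPodSandbox returns a sandbox id, CreateContainer is issued inside that sandbox, and StartContainer runs kube-proxy. As a hedged way to inspect the result on the node, a small Go sketch that shells out to crictl — this assumes crictl is installed and already configured (for example via /etc/crictl.yaml) to talk to the same containerd socket; it is an inspection aid, not part of any component logged here.

```go
// Minimal sketch: list CRI pod sandboxes and containers via crictl.
// Assumes crictl is installed and configured for the node's containerd socket.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", name, args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	run("crictl", "pods") // pod sandboxes, e.g. the one created for kube-proxy-d5tpx
	run("crictl", "ps")   // running containers created inside those sandboxes
}
```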
Jan 29 16:26:29.544027 containerd[1516]: time="2025-01-29T16:26:29.543878721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-x4p6j,Uid:4e279ec4-8c82-47b9-b813-056a3feb7ea2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d552a530cc4d7b8b21d45f6ec02816ce6de5ce9ab5db03b415fb8864975d0fba\"" Jan 29 16:26:29.546350 containerd[1516]: time="2025-01-29T16:26:29.546314672Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 16:26:30.083547 systemd[1]: run-containerd-runc-k8s.io-82a238d31ee272e8d8aa03b28246a3278b0bab3a6eb932b65841387ad9b3c320-runc.kGnzHW.mount: Deactivated successfully. Jan 29 16:26:32.107389 kubelet[2658]: I0129 16:26:32.107325 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d5tpx" podStartSLOduration=4.107304801 podStartE2EDuration="4.107304801s" podCreationTimestamp="2025-01-29 16:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:26:30.364503894 +0000 UTC m=+8.116022026" watchObservedRunningTime="2025-01-29 16:26:32.107304801 +0000 UTC m=+9.858822933" Jan 29 16:26:34.896829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2158773105.mount: Deactivated successfully. Jan 29 16:26:35.187248 containerd[1516]: time="2025-01-29T16:26:35.187081098Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:35.187882 containerd[1516]: time="2025-01-29T16:26:35.187810533Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 16:26:35.189053 containerd[1516]: time="2025-01-29T16:26:35.189017047Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:35.191335 containerd[1516]: time="2025-01-29T16:26:35.191293337Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:35.191907 containerd[1516]: time="2025-01-29T16:26:35.191868620Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 5.645522449s" Jan 29 16:26:35.191907 containerd[1516]: time="2025-01-29T16:26:35.191901834Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 16:26:35.193933 containerd[1516]: time="2025-01-29T16:26:35.193898887Z" level=info msg="CreateContainer within sandbox \"d552a530cc4d7b8b21d45f6ec02816ce6de5ce9ab5db03b415fb8864975d0fba\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 16:26:35.206222 containerd[1516]: time="2025-01-29T16:26:35.206168512Z" level=info msg="CreateContainer within sandbox \"d552a530cc4d7b8b21d45f6ec02816ce6de5ce9ab5db03b415fb8864975d0fba\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e27a7cd39c52ba6822b5c9ce9dc383d82e58b6b9a889d81ffe927d67b7a83736\"" Jan 29 
16:26:35.206636 containerd[1516]: time="2025-01-29T16:26:35.206588774Z" level=info msg="StartContainer for \"e27a7cd39c52ba6822b5c9ce9dc383d82e58b6b9a889d81ffe927d67b7a83736\"" Jan 29 16:26:35.237932 systemd[1]: Started cri-containerd-e27a7cd39c52ba6822b5c9ce9dc383d82e58b6b9a889d81ffe927d67b7a83736.scope - libcontainer container e27a7cd39c52ba6822b5c9ce9dc383d82e58b6b9a889d81ffe927d67b7a83736. Jan 29 16:26:35.263194 containerd[1516]: time="2025-01-29T16:26:35.263145783Z" level=info msg="StartContainer for \"e27a7cd39c52ba6822b5c9ce9dc383d82e58b6b9a889d81ffe927d67b7a83736\" returns successfully" Jan 29 16:26:38.650926 kubelet[2658]: I0129 16:26:38.650844 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-x4p6j" podStartSLOduration=4.003555842 podStartE2EDuration="9.650825429s" podCreationTimestamp="2025-01-29 16:26:29 +0000 UTC" firstStartedPulling="2025-01-29 16:26:29.545266773 +0000 UTC m=+7.296784905" lastFinishedPulling="2025-01-29 16:26:35.19253636 +0000 UTC m=+12.944054492" observedRunningTime="2025-01-29 16:26:35.371032823 +0000 UTC m=+13.122550955" watchObservedRunningTime="2025-01-29 16:26:38.650825429 +0000 UTC m=+16.402343561" Jan 29 16:26:38.662910 systemd[1]: Created slice kubepods-besteffort-pod833ad582_e8ea_4ae1_b82c_01d4fd69fa25.slice - libcontainer container kubepods-besteffort-pod833ad582_e8ea_4ae1_b82c_01d4fd69fa25.slice. Jan 29 16:26:38.728897 kubelet[2658]: I0129 16:26:38.728853 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/833ad582-e8ea-4ae1-b82c-01d4fd69fa25-typha-certs\") pod \"calico-typha-84c699b9bd-v5fth\" (UID: \"833ad582-e8ea-4ae1-b82c-01d4fd69fa25\") " pod="calico-system/calico-typha-84c699b9bd-v5fth" Jan 29 16:26:38.729082 kubelet[2658]: I0129 16:26:38.728906 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/833ad582-e8ea-4ae1-b82c-01d4fd69fa25-tigera-ca-bundle\") pod \"calico-typha-84c699b9bd-v5fth\" (UID: \"833ad582-e8ea-4ae1-b82c-01d4fd69fa25\") " pod="calico-system/calico-typha-84c699b9bd-v5fth" Jan 29 16:26:38.729082 kubelet[2658]: I0129 16:26:38.728928 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4vmn\" (UniqueName: \"kubernetes.io/projected/833ad582-e8ea-4ae1-b82c-01d4fd69fa25-kube-api-access-x4vmn\") pod \"calico-typha-84c699b9bd-v5fth\" (UID: \"833ad582-e8ea-4ae1-b82c-01d4fd69fa25\") " pod="calico-system/calico-typha-84c699b9bd-v5fth" Jan 29 16:26:38.878515 systemd[1]: Created slice kubepods-besteffort-pod011584a8_99c3_46e0_b1fe_2e3364a33ad2.slice - libcontainer container kubepods-besteffort-pod011584a8_99c3_46e0_b1fe_2e3364a33ad2.slice. 
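[Editor's note] For a rough sense of registry throughput, the operator image pull above reports bytes read=21762497 and a pull time of 5.645522449s. The following Go sketch is nothing more than arithmetic on those two logged numbers; note the byte count is the compressed data fetched from the registry, not the unpacked image size.

```go
// Minimal sketch: approximate average pull rate for quay.io/tigera/operator:v1.36.2,
// using the "bytes read" and pull duration reported in the containerd log above.
package main

import "fmt"

func main() {
	const bytesRead = 21762497  // from "stop pulling image ...: bytes read=21762497"
	const seconds = 5.645522449 // from "... in 5.645522449s"

	mib := float64(bytesRead) / (1024 * 1024)
	fmt.Printf("%.1f MiB in %.2fs ≈ %.2f MiB/s\n", mib, seconds, mib/seconds)
}
```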
Jan 29 16:26:38.902972 kubelet[2658]: E0129 16:26:38.902694 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgvfx" podUID="6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0" Jan 29 16:26:38.929541 kubelet[2658]: I0129 16:26:38.929464 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/011584a8-99c3-46e0-b1fe-2e3364a33ad2-cni-log-dir\") pod \"calico-node-tzfj6\" (UID: \"011584a8-99c3-46e0-b1fe-2e3364a33ad2\") " pod="calico-system/calico-node-tzfj6" Jan 29 16:26:38.929541 kubelet[2658]: I0129 16:26:38.929511 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0-kubelet-dir\") pod \"csi-node-driver-fgvfx\" (UID: \"6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0\") " pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:38.929541 kubelet[2658]: I0129 16:26:38.929527 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/011584a8-99c3-46e0-b1fe-2e3364a33ad2-var-lib-calico\") pod \"calico-node-tzfj6\" (UID: \"011584a8-99c3-46e0-b1fe-2e3364a33ad2\") " pod="calico-system/calico-node-tzfj6" Jan 29 16:26:38.929541 kubelet[2658]: I0129 16:26:38.929544 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/011584a8-99c3-46e0-b1fe-2e3364a33ad2-var-run-calico\") pod \"calico-node-tzfj6\" (UID: \"011584a8-99c3-46e0-b1fe-2e3364a33ad2\") " pod="calico-system/calico-node-tzfj6" Jan 29 16:26:38.929837 kubelet[2658]: I0129 16:26:38.929557 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0-registration-dir\") pod \"csi-node-driver-fgvfx\" (UID: \"6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0\") " pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:38.929837 kubelet[2658]: I0129 16:26:38.929576 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hrqj\" (UniqueName: \"kubernetes.io/projected/6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0-kube-api-access-6hrqj\") pod \"csi-node-driver-fgvfx\" (UID: \"6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0\") " pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:38.929837 kubelet[2658]: I0129 16:26:38.929591 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/011584a8-99c3-46e0-b1fe-2e3364a33ad2-policysync\") pod \"calico-node-tzfj6\" (UID: \"011584a8-99c3-46e0-b1fe-2e3364a33ad2\") " pod="calico-system/calico-node-tzfj6" Jan 29 16:26:38.929837 kubelet[2658]: I0129 16:26:38.929605 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/011584a8-99c3-46e0-b1fe-2e3364a33ad2-tigera-ca-bundle\") pod \"calico-node-tzfj6\" (UID: \"011584a8-99c3-46e0-b1fe-2e3364a33ad2\") " pod="calico-system/calico-node-tzfj6" Jan 29 16:26:38.929837 kubelet[2658]: I0129 
16:26:38.929618 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0-socket-dir\") pod \"csi-node-driver-fgvfx\" (UID: \"6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0\") " pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:38.929955 kubelet[2658]: I0129 16:26:38.929632 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/011584a8-99c3-46e0-b1fe-2e3364a33ad2-xtables-lock\") pod \"calico-node-tzfj6\" (UID: \"011584a8-99c3-46e0-b1fe-2e3364a33ad2\") " pod="calico-system/calico-node-tzfj6" Jan 29 16:26:38.929955 kubelet[2658]: I0129 16:26:38.929645 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/011584a8-99c3-46e0-b1fe-2e3364a33ad2-node-certs\") pod \"calico-node-tzfj6\" (UID: \"011584a8-99c3-46e0-b1fe-2e3364a33ad2\") " pod="calico-system/calico-node-tzfj6" Jan 29 16:26:38.929955 kubelet[2658]: I0129 16:26:38.929662 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/011584a8-99c3-46e0-b1fe-2e3364a33ad2-cni-net-dir\") pod \"calico-node-tzfj6\" (UID: \"011584a8-99c3-46e0-b1fe-2e3364a33ad2\") " pod="calico-system/calico-node-tzfj6" Jan 29 16:26:38.929955 kubelet[2658]: I0129 16:26:38.929675 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/011584a8-99c3-46e0-b1fe-2e3364a33ad2-lib-modules\") pod \"calico-node-tzfj6\" (UID: \"011584a8-99c3-46e0-b1fe-2e3364a33ad2\") " pod="calico-system/calico-node-tzfj6" Jan 29 16:26:38.929955 kubelet[2658]: I0129 16:26:38.929689 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb5nz\" (UniqueName: \"kubernetes.io/projected/011584a8-99c3-46e0-b1fe-2e3364a33ad2-kube-api-access-hb5nz\") pod \"calico-node-tzfj6\" (UID: \"011584a8-99c3-46e0-b1fe-2e3364a33ad2\") " pod="calico-system/calico-node-tzfj6" Jan 29 16:26:38.930088 kubelet[2658]: I0129 16:26:38.929703 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0-varrun\") pod \"csi-node-driver-fgvfx\" (UID: \"6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0\") " pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:38.930088 kubelet[2658]: I0129 16:26:38.929716 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/011584a8-99c3-46e0-b1fe-2e3364a33ad2-cni-bin-dir\") pod \"calico-node-tzfj6\" (UID: \"011584a8-99c3-46e0-b1fe-2e3364a33ad2\") " pod="calico-system/calico-node-tzfj6" Jan 29 16:26:38.930088 kubelet[2658]: I0129 16:26:38.929730 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/011584a8-99c3-46e0-b1fe-2e3364a33ad2-flexvol-driver-host\") pod \"calico-node-tzfj6\" (UID: \"011584a8-99c3-46e0-b1fe-2e3364a33ad2\") " pod="calico-system/calico-node-tzfj6" Jan 29 16:26:38.965444 containerd[1516]: time="2025-01-29T16:26:38.965413291Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-84c699b9bd-v5fth,Uid:833ad582-e8ea-4ae1-b82c-01d4fd69fa25,Namespace:calico-system,Attempt:0,}" Jan 29 16:26:39.031660 kubelet[2658]: E0129 16:26:39.031567 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:39.031660 kubelet[2658]: W0129 16:26:39.031592 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:39.031660 kubelet[2658]: E0129 16:26:39.031612 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:39.032320 kubelet[2658]: E0129 16:26:39.032291 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:39.032320 kubelet[2658]: W0129 16:26:39.032306 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:39.032320 kubelet[2658]: E0129 16:26:39.032319 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:39.033716 kubelet[2658]: E0129 16:26:39.033700 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:39.033716 kubelet[2658]: W0129 16:26:39.033713 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:39.033815 kubelet[2658]: E0129 16:26:39.033723 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:39.179880 kubelet[2658]: E0129 16:26:39.179065 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:39.179880 kubelet[2658]: W0129 16:26:39.179090 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:39.179880 kubelet[2658]: E0129 16:26:39.179310 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:39.180958 kubelet[2658]: E0129 16:26:39.180940 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:39.181041 kubelet[2658]: W0129 16:26:39.181016 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:39.181124 kubelet[2658]: E0129 16:26:39.181109 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:26:39.403660 containerd[1516]: time="2025-01-29T16:26:39.403590963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:39.403824 containerd[1516]: time="2025-01-29T16:26:39.403643541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:39.403824 containerd[1516]: time="2025-01-29T16:26:39.403687996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:39.403824 containerd[1516]: time="2025-01-29T16:26:39.403783665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:39.423922 systemd[1]: Started cri-containerd-c8c603ad3ed6c82dd9e7923e281cac8634a274c30c2a6e4614e68a02b54f581b.scope - libcontainer container c8c603ad3ed6c82dd9e7923e281cac8634a274c30c2a6e4614e68a02b54f581b. Jan 29 16:26:39.459657 containerd[1516]: time="2025-01-29T16:26:39.459453757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84c699b9bd-v5fth,Uid:833ad582-e8ea-4ae1-b82c-01d4fd69fa25,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8c603ad3ed6c82dd9e7923e281cac8634a274c30c2a6e4614e68a02b54f581b\"" Jan 29 16:26:39.461039 containerd[1516]: time="2025-01-29T16:26:39.461004626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 16:26:39.481032 containerd[1516]: time="2025-01-29T16:26:39.480990018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tzfj6,Uid:011584a8-99c3-46e0-b1fe-2e3364a33ad2,Namespace:calico-system,Attempt:0,}" Jan 29 16:26:39.753547 containerd[1516]: time="2025-01-29T16:26:39.753407514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:39.753547 containerd[1516]: time="2025-01-29T16:26:39.753511780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:39.753547 containerd[1516]: time="2025-01-29T16:26:39.753530876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:39.754347 containerd[1516]: time="2025-01-29T16:26:39.754256492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:39.774970 systemd[1]: Started cri-containerd-cb4824c2e1b0f0d2ace6f510b3fe31e7fa6f9817b76ae8742e7f16c1a58f0f85.scope - libcontainer container cb4824c2e1b0f0d2ace6f510b3fe31e7fa6f9817b76ae8742e7f16c1a58f0f85. 
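[Editor's note] The driver-call failures above all follow one pattern: the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the executable is not found, so the driver's output is the empty string, and unmarshalling "" as JSON yields exactly the logged error. A minimal Go sketch — not kubelet code — reproducing that error text:

```go
// Minimal sketch: show why an empty FlexVolume driver output produces the
// "unexpected end of JSON input" error seen in the driver-call.go entries above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var status map[string]interface{}
	output := "" // what the kubelet gets back when the driver binary is missing
	err := json.Unmarshal([]byte(output), &status)
	fmt.Println(err) // prints: unexpected end of JSON input
}
```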
Jan 29 16:26:39.802989 containerd[1516]: time="2025-01-29T16:26:39.802951301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tzfj6,Uid:011584a8-99c3-46e0-b1fe-2e3364a33ad2,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb4824c2e1b0f0d2ace6f510b3fe31e7fa6f9817b76ae8742e7f16c1a58f0f85\"" Jan 29 16:26:41.332037 kubelet[2658]: E0129 16:26:41.331966 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgvfx" podUID="6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0" Jan 29 16:26:41.775058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2602407145.mount: Deactivated successfully. Jan 29 16:26:42.530283 containerd[1516]: time="2025-01-29T16:26:42.530238891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:42.531307 containerd[1516]: time="2025-01-29T16:26:42.531142951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 29 16:26:42.532313 containerd[1516]: time="2025-01-29T16:26:42.532267798Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:42.534566 containerd[1516]: time="2025-01-29T16:26:42.534485790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:42.535401 containerd[1516]: time="2025-01-29T16:26:42.535358121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.074310665s" Jan 29 16:26:42.535449 containerd[1516]: time="2025-01-29T16:26:42.535404309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 16:26:42.536553 containerd[1516]: time="2025-01-29T16:26:42.536383370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 16:26:42.545592 containerd[1516]: time="2025-01-29T16:26:42.545316215Z" level=info msg="CreateContainer within sandbox \"c8c603ad3ed6c82dd9e7923e281cac8634a274c30c2a6e4614e68a02b54f581b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 16:26:42.561142 containerd[1516]: time="2025-01-29T16:26:42.561090461Z" level=info msg="CreateContainer within sandbox \"c8c603ad3ed6c82dd9e7923e281cac8634a274c30c2a6e4614e68a02b54f581b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"26093204bb3065c9f7b103edd8847be8dfee1b20816257e61144a6e23a3cc627\"" Jan 29 16:26:42.562371 containerd[1516]: time="2025-01-29T16:26:42.561826967Z" level=info msg="StartContainer for \"26093204bb3065c9f7b103edd8847be8dfee1b20816257e61144a6e23a3cc627\"" Jan 29 16:26:42.590035 systemd[1]: Started cri-containerd-26093204bb3065c9f7b103edd8847be8dfee1b20816257e61144a6e23a3cc627.scope - 
libcontainer container 26093204bb3065c9f7b103edd8847be8dfee1b20816257e61144a6e23a3cc627. Jan 29 16:26:42.636633 containerd[1516]: time="2025-01-29T16:26:42.636578304Z" level=info msg="StartContainer for \"26093204bb3065c9f7b103edd8847be8dfee1b20816257e61144a6e23a3cc627\" returns successfully" Jan 29 16:26:43.331597 kubelet[2658]: E0129 16:26:43.331540 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgvfx" podUID="6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0" Jan 29 16:26:43.440572 kubelet[2658]: E0129 16:26:43.440525 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.440572 kubelet[2658]: W0129 16:26:43.440556 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.440572 kubelet[2658]: E0129 16:26:43.440583 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.440834 kubelet[2658]: E0129 16:26:43.440813 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.440834 kubelet[2658]: W0129 16:26:43.440828 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.440910 kubelet[2658]: E0129 16:26:43.440839 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.441080 kubelet[2658]: E0129 16:26:43.441052 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.441080 kubelet[2658]: W0129 16:26:43.441068 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.441080 kubelet[2658]: E0129 16:26:43.441078 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.441439 kubelet[2658]: E0129 16:26:43.441421 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.441439 kubelet[2658]: W0129 16:26:43.441435 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.441508 kubelet[2658]: E0129 16:26:43.441444 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:26:43.441681 kubelet[2658]: E0129 16:26:43.441665 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.441681 kubelet[2658]: W0129 16:26:43.441679 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.441732 kubelet[2658]: E0129 16:26:43.441692 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.441926 kubelet[2658]: E0129 16:26:43.441903 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.442029 kubelet[2658]: W0129 16:26:43.441930 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.442029 kubelet[2658]: E0129 16:26:43.441950 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.442171 kubelet[2658]: E0129 16:26:43.442154 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.442198 kubelet[2658]: W0129 16:26:43.442176 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.442198 kubelet[2658]: E0129 16:26:43.442188 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.442394 kubelet[2658]: E0129 16:26:43.442380 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.442394 kubelet[2658]: W0129 16:26:43.442391 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.442443 kubelet[2658]: E0129 16:26:43.442400 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.442607 kubelet[2658]: E0129 16:26:43.442594 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.442607 kubelet[2658]: W0129 16:26:43.442603 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.442678 kubelet[2658]: E0129 16:26:43.442611 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:26:43.442899 kubelet[2658]: E0129 16:26:43.442859 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.442899 kubelet[2658]: W0129 16:26:43.442888 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.443091 kubelet[2658]: E0129 16:26:43.442917 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.443231 kubelet[2658]: E0129 16:26:43.443208 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.443231 kubelet[2658]: W0129 16:26:43.443220 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.443231 kubelet[2658]: E0129 16:26:43.443229 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.443442 kubelet[2658]: E0129 16:26:43.443428 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.443442 kubelet[2658]: W0129 16:26:43.443438 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.443509 kubelet[2658]: E0129 16:26:43.443446 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.443684 kubelet[2658]: E0129 16:26:43.443669 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.443684 kubelet[2658]: W0129 16:26:43.443680 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.443732 kubelet[2658]: E0129 16:26:43.443689 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.443906 kubelet[2658]: E0129 16:26:43.443891 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.443906 kubelet[2658]: W0129 16:26:43.443901 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.443979 kubelet[2658]: E0129 16:26:43.443909 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:26:43.444133 kubelet[2658]: E0129 16:26:43.444119 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.444133 kubelet[2658]: W0129 16:26:43.444129 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.444182 kubelet[2658]: E0129 16:26:43.444141 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.462002 kubelet[2658]: E0129 16:26:43.461979 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.462002 kubelet[2658]: W0129 16:26:43.461995 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.462002 kubelet[2658]: E0129 16:26:43.462008 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.462265 kubelet[2658]: E0129 16:26:43.462246 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.462265 kubelet[2658]: W0129 16:26:43.462263 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.462369 kubelet[2658]: E0129 16:26:43.462281 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.462721 kubelet[2658]: E0129 16:26:43.462697 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.462721 kubelet[2658]: W0129 16:26:43.462713 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.462807 kubelet[2658]: E0129 16:26:43.462733 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.463002 kubelet[2658]: E0129 16:26:43.462985 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.463002 kubelet[2658]: W0129 16:26:43.462997 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.463053 kubelet[2658]: E0129 16:26:43.463012 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:26:43.463221 kubelet[2658]: E0129 16:26:43.463209 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.463221 kubelet[2658]: W0129 16:26:43.463219 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.463280 kubelet[2658]: E0129 16:26:43.463233 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.463467 kubelet[2658]: E0129 16:26:43.463452 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.463497 kubelet[2658]: W0129 16:26:43.463466 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.463497 kubelet[2658]: E0129 16:26:43.463482 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.463710 kubelet[2658]: E0129 16:26:43.463695 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.463710 kubelet[2658]: W0129 16:26:43.463707 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.463765 kubelet[2658]: E0129 16:26:43.463723 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.463999 kubelet[2658]: E0129 16:26:43.463982 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.463999 kubelet[2658]: W0129 16:26:43.463994 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.464054 kubelet[2658]: E0129 16:26:43.464008 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.464210 kubelet[2658]: E0129 16:26:43.464197 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.464210 kubelet[2658]: W0129 16:26:43.464208 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.464257 kubelet[2658]: E0129 16:26:43.464221 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:26:43.464455 kubelet[2658]: E0129 16:26:43.464441 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.464455 kubelet[2658]: W0129 16:26:43.464453 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.464504 kubelet[2658]: E0129 16:26:43.464467 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.464801 kubelet[2658]: E0129 16:26:43.464775 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.464834 kubelet[2658]: W0129 16:26:43.464788 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.464834 kubelet[2658]: E0129 16:26:43.464822 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.465058 kubelet[2658]: E0129 16:26:43.465044 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.465080 kubelet[2658]: W0129 16:26:43.465056 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.465080 kubelet[2658]: E0129 16:26:43.465072 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.465301 kubelet[2658]: E0129 16:26:43.465289 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.465323 kubelet[2658]: W0129 16:26:43.465300 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.465323 kubelet[2658]: E0129 16:26:43.465316 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.465614 kubelet[2658]: E0129 16:26:43.465586 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.465614 kubelet[2658]: W0129 16:26:43.465605 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.465656 kubelet[2658]: E0129 16:26:43.465626 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:26:43.465944 kubelet[2658]: E0129 16:26:43.465921 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.465944 kubelet[2658]: W0129 16:26:43.465942 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.466004 kubelet[2658]: E0129 16:26:43.465960 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.466190 kubelet[2658]: E0129 16:26:43.466174 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.466190 kubelet[2658]: W0129 16:26:43.466186 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.466247 kubelet[2658]: E0129 16:26:43.466200 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.466427 kubelet[2658]: E0129 16:26:43.466414 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.466460 kubelet[2658]: W0129 16:26:43.466426 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.466460 kubelet[2658]: E0129 16:26:43.466444 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:26:43.466658 kubelet[2658]: E0129 16:26:43.466642 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:26:43.466658 kubelet[2658]: W0129 16:26:43.466653 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:26:43.466711 kubelet[2658]: E0129 16:26:43.466662 2658 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:26:43.871330 containerd[1516]: time="2025-01-29T16:26:43.871293165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:43.872123 containerd[1516]: time="2025-01-29T16:26:43.872086899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 29 16:26:43.874076 containerd[1516]: time="2025-01-29T16:26:43.874045363Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:43.876205 containerd[1516]: time="2025-01-29T16:26:43.876180067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:43.876706 containerd[1516]: time="2025-01-29T16:26:43.876674778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.340264627s" Jan 29 16:26:43.876756 containerd[1516]: time="2025-01-29T16:26:43.876707649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 16:26:43.878337 containerd[1516]: time="2025-01-29T16:26:43.878312017Z" level=info msg="CreateContainer within sandbox \"cb4824c2e1b0f0d2ace6f510b3fe31e7fa6f9817b76ae8742e7f16c1a58f0f85\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 16:26:43.892562 containerd[1516]: time="2025-01-29T16:26:43.892519560Z" level=info msg="CreateContainer within sandbox \"cb4824c2e1b0f0d2ace6f510b3fe31e7fa6f9817b76ae8742e7f16c1a58f0f85\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2614b445e51d48608c696b6b777c88ce166ca95e888b8d03d869ed47ef1c14a5\"" Jan 29 16:26:43.893005 containerd[1516]: time="2025-01-29T16:26:43.892985136Z" level=info msg="StartContainer for \"2614b445e51d48608c696b6b777c88ce166ca95e888b8d03d869ed47ef1c14a5\"" Jan 29 16:26:43.925953 systemd[1]: Started cri-containerd-2614b445e51d48608c696b6b777c88ce166ca95e888b8d03d869ed47ef1c14a5.scope - libcontainer container 2614b445e51d48608c696b6b777c88ce166ca95e888b8d03d869ed47ef1c14a5. Jan 29 16:26:43.953973 containerd[1516]: time="2025-01-29T16:26:43.953934275Z" level=info msg="StartContainer for \"2614b445e51d48608c696b6b777c88ce166ca95e888b8d03d869ed47ef1c14a5\" returns successfully" Jan 29 16:26:43.966757 systemd[1]: cri-containerd-2614b445e51d48608c696b6b777c88ce166ca95e888b8d03d869ed47ef1c14a5.scope: Deactivated successfully. Jan 29 16:26:43.992453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2614b445e51d48608c696b6b777c88ce166ca95e888b8d03d869ed47ef1c14a5-rootfs.mount: Deactivated successfully. 
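The repeated "unexpected end of JSON input" / "executable file not found in $PATH" entries above are kubelet's FlexVolume prober invoking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init before the flexvol-driver container pulled just above has presumably installed that binary; with nothing to execute there is no stdout, and an empty string cannot be unmarshalled as JSON. A minimal sketch of the reply the prober expects on init, assuming the documented FlexVolume driver interface (illustrative only, not Calico's actual uds driver):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // DriverStatus mirrors the JSON object kubelet's driver-call expects on
    // stdout; an empty reply is exactly what produces the "unexpected end of
    // JSON input" errors in the log above.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(DriverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // Other calls (mount, unmount, ...) would be handled here; a driver may
        // also answer with a "Not supported" status for calls it does not implement.
        out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
        fmt.Println(string(out))
    }

This is consistent with the probe errors stopping once the flexvol-driver container above has run and placed a working binary in that directory.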
Jan 29 16:26:44.384776 kubelet[2658]: I0129 16:26:44.384740 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:26:44.425098 kubelet[2658]: I0129 16:26:44.425032 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84c699b9bd-v5fth" podStartSLOduration=3.349436555 podStartE2EDuration="6.425015116s" podCreationTimestamp="2025-01-29 16:26:38 +0000 UTC" firstStartedPulling="2025-01-29 16:26:39.460606396 +0000 UTC m=+17.212124528" lastFinishedPulling="2025-01-29 16:26:42.536184957 +0000 UTC m=+20.287703089" observedRunningTime="2025-01-29 16:26:43.391950009 +0000 UTC m=+21.143468141" watchObservedRunningTime="2025-01-29 16:26:44.425015116 +0000 UTC m=+22.176533248" Jan 29 16:26:44.584528 containerd[1516]: time="2025-01-29T16:26:44.584459490Z" level=info msg="shim disconnected" id=2614b445e51d48608c696b6b777c88ce166ca95e888b8d03d869ed47ef1c14a5 namespace=k8s.io Jan 29 16:26:44.584528 containerd[1516]: time="2025-01-29T16:26:44.584526215Z" level=warning msg="cleaning up after shim disconnected" id=2614b445e51d48608c696b6b777c88ce166ca95e888b8d03d869ed47ef1c14a5 namespace=k8s.io Jan 29 16:26:44.584712 containerd[1516]: time="2025-01-29T16:26:44.584539580Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:45.331874 kubelet[2658]: E0129 16:26:45.331786 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgvfx" podUID="6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0" Jan 29 16:26:45.387729 containerd[1516]: time="2025-01-29T16:26:45.387687710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 16:26:47.331487 kubelet[2658]: E0129 16:26:47.331436 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgvfx" podUID="6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0" Jan 29 16:26:49.331717 kubelet[2658]: E0129 16:26:49.331649 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgvfx" podUID="6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0" Jan 29 16:26:51.319208 containerd[1516]: time="2025-01-29T16:26:51.319138741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:51.322393 containerd[1516]: time="2025-01-29T16:26:51.322276466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 16:26:51.328334 containerd[1516]: time="2025-01-29T16:26:51.328169206Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:51.331730 kubelet[2658]: E0129 16:26:51.331630 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-fgvfx" podUID="6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0" Jan 29 16:26:51.333546 containerd[1516]: time="2025-01-29T16:26:51.332361062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:51.333546 containerd[1516]: time="2025-01-29T16:26:51.333275950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.945535822s" Jan 29 16:26:51.333546 containerd[1516]: time="2025-01-29T16:26:51.333335141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 16:26:51.336524 containerd[1516]: time="2025-01-29T16:26:51.336468719Z" level=info msg="CreateContainer within sandbox \"cb4824c2e1b0f0d2ace6f510b3fe31e7fa6f9817b76ae8742e7f16c1a58f0f85\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 16:26:51.357143 containerd[1516]: time="2025-01-29T16:26:51.357080772Z" level=info msg="CreateContainer within sandbox \"cb4824c2e1b0f0d2ace6f510b3fe31e7fa6f9817b76ae8742e7f16c1a58f0f85\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"28b8577b2168be1b72ffd254a0eaaa6357055abd5e507c14e4680f1e7a28c7ff\"" Jan 29 16:26:51.357672 containerd[1516]: time="2025-01-29T16:26:51.357645262Z" level=info msg="StartContainer for \"28b8577b2168be1b72ffd254a0eaaa6357055abd5e507c14e4680f1e7a28c7ff\"" Jan 29 16:26:51.400113 systemd[1]: Started cri-containerd-28b8577b2168be1b72ffd254a0eaaa6357055abd5e507c14e4680f1e7a28c7ff.scope - libcontainer container 28b8577b2168be1b72ffd254a0eaaa6357055abd5e507c14e4680f1e7a28c7ff. Jan 29 16:26:51.433702 containerd[1516]: time="2025-01-29T16:26:51.433652735Z" level=info msg="StartContainer for \"28b8577b2168be1b72ffd254a0eaaa6357055abd5e507c14e4680f1e7a28c7ff\" returns successfully" Jan 29 16:26:52.956220 containerd[1516]: time="2025-01-29T16:26:52.956171301Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:26:52.959126 systemd[1]: cri-containerd-28b8577b2168be1b72ffd254a0eaaa6357055abd5e507c14e4680f1e7a28c7ff.scope: Deactivated successfully. Jan 29 16:26:52.959538 systemd[1]: cri-containerd-28b8577b2168be1b72ffd254a0eaaa6357055abd5e507c14e4680f1e7a28c7ff.scope: Consumed 549ms CPU time, 155.2M memory peak, 8K read from disk, 151M written to disk. Jan 29 16:26:52.980673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28b8577b2168be1b72ffd254a0eaaa6357055abd5e507c14e4680f1e7a28c7ff-rootfs.mount: Deactivated successfully. Jan 29 16:26:53.056311 kubelet[2658]: I0129 16:26:53.056273 2658 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 16:26:53.392406 systemd[1]: Created slice kubepods-besteffort-pod6d3d71b5_7b0e_4f54_a59d_f9ebb4a75dd0.slice - libcontainer container kubepods-besteffort-pod6d3d71b5_7b0e_4f54_a59d_f9ebb4a75dd0.slice. 
Jan 29 16:26:53.415955 containerd[1516]: time="2025-01-29T16:26:53.415909378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvfx,Uid:6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0,Namespace:calico-system,Attempt:0,}" Jan 29 16:26:53.439369 containerd[1516]: time="2025-01-29T16:26:53.439288981Z" level=info msg="shim disconnected" id=28b8577b2168be1b72ffd254a0eaaa6357055abd5e507c14e4680f1e7a28c7ff namespace=k8s.io Jan 29 16:26:53.439369 containerd[1516]: time="2025-01-29T16:26:53.439347571Z" level=warning msg="cleaning up after shim disconnected" id=28b8577b2168be1b72ffd254a0eaaa6357055abd5e507c14e4680f1e7a28c7ff namespace=k8s.io Jan 29 16:26:53.439369 containerd[1516]: time="2025-01-29T16:26:53.439355516Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:53.497140 systemd[1]: Started sshd@9-10.0.0.146:22-10.0.0.1:46982.service - OpenSSH per-connection server daemon (10.0.0.1:46982). Jan 29 16:26:53.521848 systemd[1]: Created slice kubepods-burstable-pod137b728f_72a7_4e26_ad10_b54fc9528d91.slice - libcontainer container kubepods-burstable-pod137b728f_72a7_4e26_ad10_b54fc9528d91.slice. Jan 29 16:26:53.534168 systemd[1]: Created slice kubepods-besteffort-poda373ce1d_d072_4edb_a73d_44d8bb96f265.slice - libcontainer container kubepods-besteffort-poda373ce1d_d072_4edb_a73d_44d8bb96f265.slice. Jan 29 16:26:53.542184 systemd[1]: Created slice kubepods-besteffort-podff9f28aa_8c77_44d4_a5fb_e8a76b9ac18e.slice - libcontainer container kubepods-besteffort-podff9f28aa_8c77_44d4_a5fb_e8a76b9ac18e.slice. Jan 29 16:26:53.551041 systemd[1]: Created slice kubepods-burstable-pod91170ca1_19cd_4c25_a591_9a7f6b7062b6.slice - libcontainer container kubepods-burstable-pod91170ca1_19cd_4c25_a591_9a7f6b7062b6.slice. Jan 29 16:26:53.558493 systemd[1]: Created slice kubepods-besteffort-pode6a6a4b6_0cc0_4539_9ece_d802ad97d93f.slice - libcontainer container kubepods-besteffort-pode6a6a4b6_0cc0_4539_9ece_d802ad97d93f.slice. 
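The "Created slice" entries above show how the kubelet's systemd cgroup driver names per-pod cgroups: one kubepods-<qos>-pod<uid>.slice unit per pod, with the dashes in the pod UID replaced by underscores (compare the csi-node-driver-fgvfx UID 6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0 with its slice name). A small sketch of that mapping as it appears here:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the naming visible in the "Created slice" entries
    // above: the pod's QoS class (besteffort or burstable in this log) plus the
    // pod UID with its dashes swapped for underscores.
    func sliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("besteffort", "6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0"))
        // kubepods-besteffort-pod6d3d71b5_7b0e_4f54_a59d_f9ebb4a75dd0.slice
    }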
Jan 29 16:26:53.574002 sshd[3380]: Accepted publickey for core from 10.0.0.1 port 46982 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:53.576479 sshd-session[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:53.581183 containerd[1516]: time="2025-01-29T16:26:53.581120047Z" level=error msg="Failed to destroy network for sandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.582652 containerd[1516]: time="2025-01-29T16:26:53.582440487Z" level=error msg="encountered an error cleaning up failed sandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.582652 containerd[1516]: time="2025-01-29T16:26:53.582538772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvfx,Uid:6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.583357 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e-shm.mount: Deactivated successfully. 
Jan 29 16:26:53.583698 kubelet[2658]: E0129 16:26:53.583651 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.583813 kubelet[2658]: E0129 16:26:53.583738 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:53.583813 kubelet[2658]: E0129 16:26:53.583765 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:53.584076 kubelet[2658]: E0129 16:26:53.583833 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fgvfx_calico-system(6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fgvfx_calico-system(6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fgvfx" podUID="6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0" Jan 29 16:26:53.586751 systemd-logind[1493]: New session 10 of user core. Jan 29 16:26:53.591941 systemd[1]: Started session-10.scope - Session 10 of User core. 
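Every sandbox failure from here on reduces to the same missing file: the Calico CNI plugin reads the node name from /var/lib/calico/nodename, which the calico/node container writes once it is up, and until that happens each RunPodSandbox attempt fails at that stat. A rough, illustrative equivalent of the failing check (not Calico's actual code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodename reads the file that calico/node is expected to have written;
    // while it is absent, the CNI ADD fails with the same hint seen in the log.
    func nodename() (string, error) {
        b, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        name, err := nodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(name)
    }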
Jan 29 16:26:53.623757 kubelet[2658]: I0129 16:26:53.623716 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91170ca1-19cd-4c25-a591-9a7f6b7062b6-config-volume\") pod \"coredns-668d6bf9bc-pjjr9\" (UID: \"91170ca1-19cd-4c25-a591-9a7f6b7062b6\") " pod="kube-system/coredns-668d6bf9bc-pjjr9" Jan 29 16:26:53.623838 kubelet[2658]: I0129 16:26:53.623766 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e6a6a4b6-0cc0-4539-9ece-d802ad97d93f-calico-apiserver-certs\") pod \"calico-apiserver-84ffc4856f-chfgm\" (UID: \"e6a6a4b6-0cc0-4539-9ece-d802ad97d93f\") " pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" Jan 29 16:26:53.623838 kubelet[2658]: I0129 16:26:53.623787 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvj7f\" (UniqueName: \"kubernetes.io/projected/137b728f-72a7-4e26-ad10-b54fc9528d91-kube-api-access-kvj7f\") pod \"coredns-668d6bf9bc-6ssm5\" (UID: \"137b728f-72a7-4e26-ad10-b54fc9528d91\") " pod="kube-system/coredns-668d6bf9bc-6ssm5" Jan 29 16:26:53.623838 kubelet[2658]: I0129 16:26:53.623819 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a373ce1d-d072-4edb-a73d-44d8bb96f265-tigera-ca-bundle\") pod \"calico-kube-controllers-55b88d6857-fnfkx\" (UID: \"a373ce1d-d072-4edb-a73d-44d8bb96f265\") " pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" Jan 29 16:26:53.623838 kubelet[2658]: I0129 16:26:53.623836 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7jhs\" (UniqueName: \"kubernetes.io/projected/e6a6a4b6-0cc0-4539-9ece-d802ad97d93f-kube-api-access-n7jhs\") pod \"calico-apiserver-84ffc4856f-chfgm\" (UID: \"e6a6a4b6-0cc0-4539-9ece-d802ad97d93f\") " pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" Jan 29 16:26:53.623950 kubelet[2658]: I0129 16:26:53.623854 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz4x9\" (UniqueName: \"kubernetes.io/projected/91170ca1-19cd-4c25-a591-9a7f6b7062b6-kube-api-access-hz4x9\") pod \"coredns-668d6bf9bc-pjjr9\" (UID: \"91170ca1-19cd-4c25-a591-9a7f6b7062b6\") " pod="kube-system/coredns-668d6bf9bc-pjjr9" Jan 29 16:26:53.623950 kubelet[2658]: I0129 16:26:53.623883 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/137b728f-72a7-4e26-ad10-b54fc9528d91-config-volume\") pod \"coredns-668d6bf9bc-6ssm5\" (UID: \"137b728f-72a7-4e26-ad10-b54fc9528d91\") " pod="kube-system/coredns-668d6bf9bc-6ssm5" Jan 29 16:26:53.623950 kubelet[2658]: I0129 16:26:53.623899 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e-calico-apiserver-certs\") pod \"calico-apiserver-84ffc4856f-b8jf8\" (UID: \"ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e\") " pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" Jan 29 16:26:53.623950 kubelet[2658]: I0129 16:26:53.623920 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vqk5\" 
(UniqueName: \"kubernetes.io/projected/ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e-kube-api-access-2vqk5\") pod \"calico-apiserver-84ffc4856f-b8jf8\" (UID: \"ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e\") " pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" Jan 29 16:26:53.623950 kubelet[2658]: I0129 16:26:53.623942 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns9r5\" (UniqueName: \"kubernetes.io/projected/a373ce1d-d072-4edb-a73d-44d8bb96f265-kube-api-access-ns9r5\") pod \"calico-kube-controllers-55b88d6857-fnfkx\" (UID: \"a373ce1d-d072-4edb-a73d-44d8bb96f265\") " pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" Jan 29 16:26:53.703164 sshd[3410]: Connection closed by 10.0.0.1 port 46982 Jan 29 16:26:53.704603 sshd-session[3380]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:53.708637 systemd[1]: sshd@9-10.0.0.146:22-10.0.0.1:46982.service: Deactivated successfully. Jan 29 16:26:53.710995 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 16:26:53.711692 systemd-logind[1493]: Session 10 logged out. Waiting for processes to exit. Jan 29 16:26:53.712533 systemd-logind[1493]: Removed session 10. Jan 29 16:26:53.840149 containerd[1516]: time="2025-01-29T16:26:53.840080092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b88d6857-fnfkx,Uid:a373ce1d-d072-4edb-a73d-44d8bb96f265,Namespace:calico-system,Attempt:0,}" Jan 29 16:26:53.840313 containerd[1516]: time="2025-01-29T16:26:53.840092255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ssm5,Uid:137b728f-72a7-4e26-ad10-b54fc9528d91,Namespace:kube-system,Attempt:0,}" Jan 29 16:26:53.848896 containerd[1516]: time="2025-01-29T16:26:53.848853071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-b8jf8,Uid:ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e,Namespace:calico-apiserver,Attempt:0,}" Jan 29 16:26:53.856351 containerd[1516]: time="2025-01-29T16:26:53.856319735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pjjr9,Uid:91170ca1-19cd-4c25-a591-9a7f6b7062b6,Namespace:kube-system,Attempt:0,}" Jan 29 16:26:53.861943 containerd[1516]: time="2025-01-29T16:26:53.861903713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-chfgm,Uid:e6a6a4b6-0cc0-4539-9ece-d802ad97d93f,Namespace:calico-apiserver,Attempt:0,}" Jan 29 16:26:53.943182 containerd[1516]: time="2025-01-29T16:26:53.943126300Z" level=error msg="Failed to destroy network for sandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.943597 containerd[1516]: time="2025-01-29T16:26:53.943562029Z" level=error msg="encountered an error cleaning up failed sandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.943649 containerd[1516]: time="2025-01-29T16:26:53.943631369Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b88d6857-fnfkx,Uid:a373ce1d-d072-4edb-a73d-44d8bb96f265,Namespace:calico-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.944045 kubelet[2658]: E0129 16:26:53.943994 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.944660 kubelet[2658]: E0129 16:26:53.944222 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" Jan 29 16:26:53.944660 kubelet[2658]: E0129 16:26:53.944295 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" Jan 29 16:26:53.944660 kubelet[2658]: E0129 16:26:53.944358 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55b88d6857-fnfkx_calico-system(a373ce1d-d072-4edb-a73d-44d8bb96f265)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55b88d6857-fnfkx_calico-system(a373ce1d-d072-4edb-a73d-44d8bb96f265)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" podUID="a373ce1d-d072-4edb-a73d-44d8bb96f265" Jan 29 16:26:53.962377 containerd[1516]: time="2025-01-29T16:26:53.962238568Z" level=error msg="Failed to destroy network for sandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.963309 containerd[1516]: time="2025-01-29T16:26:53.963245179Z" level=error msg="encountered an error cleaning up failed sandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.963469 containerd[1516]: time="2025-01-29T16:26:53.963447269Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ssm5,Uid:137b728f-72a7-4e26-ad10-b54fc9528d91,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.963898 kubelet[2658]: E0129 16:26:53.963785 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.963971 kubelet[2658]: E0129 16:26:53.963905 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6ssm5" Jan 29 16:26:53.963971 kubelet[2658]: E0129 16:26:53.963926 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6ssm5" Jan 29 16:26:53.964031 kubelet[2658]: E0129 16:26:53.963974 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6ssm5_kube-system(137b728f-72a7-4e26-ad10-b54fc9528d91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6ssm5_kube-system(137b728f-72a7-4e26-ad10-b54fc9528d91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6ssm5" podUID="137b728f-72a7-4e26-ad10-b54fc9528d91" Jan 29 16:26:53.975100 containerd[1516]: time="2025-01-29T16:26:53.974951316Z" level=error msg="Failed to destroy network for sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.975717 containerd[1516]: time="2025-01-29T16:26:53.975470131Z" level=error msg="encountered an error cleaning up failed sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.975717 containerd[1516]: 
time="2025-01-29T16:26:53.975527088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-chfgm,Uid:e6a6a4b6-0cc0-4539-9ece-d802ad97d93f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.976053 kubelet[2658]: E0129 16:26:53.975984 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.976160 containerd[1516]: time="2025-01-29T16:26:53.975923903Z" level=error msg="Failed to destroy network for sandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.976259 kubelet[2658]: E0129 16:26:53.976050 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" Jan 29 16:26:53.976259 kubelet[2658]: E0129 16:26:53.976070 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" Jan 29 16:26:53.976259 kubelet[2658]: E0129 16:26:53.976113 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84ffc4856f-chfgm_calico-apiserver(e6a6a4b6-0cc0-4539-9ece-d802ad97d93f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84ffc4856f-chfgm_calico-apiserver(e6a6a4b6-0cc0-4539-9ece-d802ad97d93f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" podUID="e6a6a4b6-0cc0-4539-9ece-d802ad97d93f" Jan 29 16:26:53.977223 kubelet[2658]: E0129 16:26:53.976678 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.977223 kubelet[2658]: E0129 16:26:53.976718 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" Jan 29 16:26:53.977223 kubelet[2658]: E0129 16:26:53.976739 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" Jan 29 16:26:53.977351 containerd[1516]: time="2025-01-29T16:26:53.976417450Z" level=error msg="encountered an error cleaning up failed sandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.977351 containerd[1516]: time="2025-01-29T16:26:53.976497551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-b8jf8,Uid:ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.977433 kubelet[2658]: E0129 16:26:53.976765 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84ffc4856f-b8jf8_calico-apiserver(ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84ffc4856f-b8jf8_calico-apiserver(ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" podUID="ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e" Jan 29 16:26:53.985828 containerd[1516]: time="2025-01-29T16:26:53.983861151Z" level=error msg="Failed to destroy network for sandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.985828 containerd[1516]: time="2025-01-29T16:26:53.984424790Z" level=error msg="encountered an error cleaning up failed sandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.985828 containerd[1516]: time="2025-01-29T16:26:53.984472790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pjjr9,Uid:91170ca1-19cd-4c25-a591-9a7f6b7062b6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.985975 kubelet[2658]: E0129 16:26:53.984631 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:53.985975 kubelet[2658]: E0129 16:26:53.984676 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pjjr9" Jan 29 16:26:53.985975 kubelet[2658]: E0129 16:26:53.984709 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pjjr9" Jan 29 16:26:53.986076 kubelet[2658]: E0129 16:26:53.984739 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pjjr9_kube-system(91170ca1-19cd-4c25-a591-9a7f6b7062b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pjjr9_kube-system(91170ca1-19cd-4c25-a591-9a7f6b7062b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pjjr9" podUID="91170ca1-19cd-4c25-a591-9a7f6b7062b6" Jan 29 16:26:53.996290 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff-shm.mount: Deactivated successfully. 
Jan 29 16:26:54.407667 kubelet[2658]: I0129 16:26:54.407631 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e" Jan 29 16:26:54.408492 containerd[1516]: time="2025-01-29T16:26:54.408429189Z" level=info msg="StopPodSandbox for \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\"" Jan 29 16:26:54.408691 containerd[1516]: time="2025-01-29T16:26:54.408659021Z" level=info msg="Ensure that sandbox 33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e in task-service has been cleanup successfully" Jan 29 16:26:54.411252 systemd[1]: run-netns-cni\x2d8f08c4c0\x2d9f11\x2d8c25\x2d8b2b\x2d435899a62e04.mount: Deactivated successfully. Jan 29 16:26:54.411891 containerd[1516]: time="2025-01-29T16:26:54.411769232Z" level=info msg="TearDown network for sandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\" successfully" Jan 29 16:26:54.411891 containerd[1516]: time="2025-01-29T16:26:54.411820918Z" level=info msg="StopPodSandbox for \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\" returns successfully" Jan 29 16:26:54.412310 kubelet[2658]: I0129 16:26:54.412237 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518" Jan 29 16:26:54.412718 containerd[1516]: time="2025-01-29T16:26:54.412672348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvfx,Uid:6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0,Namespace:calico-system,Attempt:1,}" Jan 29 16:26:54.413634 containerd[1516]: time="2025-01-29T16:26:54.413575243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 16:26:54.413812 containerd[1516]: time="2025-01-29T16:26:54.413588959Z" level=info msg="StopPodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\"" Jan 29 16:26:54.413993 containerd[1516]: time="2025-01-29T16:26:54.413949216Z" level=info msg="Ensure that sandbox 3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518 in task-service has been cleanup successfully" Jan 29 16:26:54.414239 containerd[1516]: time="2025-01-29T16:26:54.414154892Z" level=info msg="TearDown network for sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" successfully" Jan 29 16:26:54.414239 containerd[1516]: time="2025-01-29T16:26:54.414166594Z" level=info msg="StopPodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" returns successfully" Jan 29 16:26:54.414857 kubelet[2658]: I0129 16:26:54.414603 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8" Jan 29 16:26:54.415091 containerd[1516]: time="2025-01-29T16:26:54.415069410Z" level=info msg="StopPodSandbox for \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\"" Jan 29 16:26:54.415653 containerd[1516]: time="2025-01-29T16:26:54.415277471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-chfgm,Uid:e6a6a4b6-0cc0-4539-9ece-d802ad97d93f,Namespace:calico-apiserver,Attempt:1,}" Jan 29 16:26:54.416282 containerd[1516]: time="2025-01-29T16:26:54.415924786Z" level=info msg="Ensure that sandbox dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8 in task-service has been cleanup successfully" Jan 29 16:26:54.416349 kubelet[2658]: I0129 16:26:54.416033 2658 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be" Jan 29 16:26:54.416495 systemd[1]: run-netns-cni\x2da29f45d8\x2d6ccc\x2d62b0\x2d65fa\x2dd20c1396233c.mount: Deactivated successfully. Jan 29 16:26:54.416705 containerd[1516]: time="2025-01-29T16:26:54.416600174Z" level=info msg="StopPodSandbox for \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\"" Jan 29 16:26:54.416762 containerd[1516]: time="2025-01-29T16:26:54.416751619Z" level=info msg="Ensure that sandbox 71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be in task-service has been cleanup successfully" Jan 29 16:26:54.417190 containerd[1516]: time="2025-01-29T16:26:54.416852048Z" level=info msg="TearDown network for sandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\" successfully" Jan 29 16:26:54.417190 containerd[1516]: time="2025-01-29T16:26:54.416871554Z" level=info msg="StopPodSandbox for \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\" returns successfully" Jan 29 16:26:54.417541 containerd[1516]: time="2025-01-29T16:26:54.417516436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-b8jf8,Uid:ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e,Namespace:calico-apiserver,Attempt:1,}" Jan 29 16:26:54.418536 containerd[1516]: time="2025-01-29T16:26:54.418458985Z" level=info msg="TearDown network for sandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\" successfully" Jan 29 16:26:54.418536 containerd[1516]: time="2025-01-29T16:26:54.418477460Z" level=info msg="StopPodSandbox for \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\" returns successfully" Jan 29 16:26:54.419188 containerd[1516]: time="2025-01-29T16:26:54.419095200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ssm5,Uid:137b728f-72a7-4e26-ad10-b54fc9528d91,Namespace:kube-system,Attempt:1,}" Jan 29 16:26:54.419561 kubelet[2658]: I0129 16:26:54.419518 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b" Jan 29 16:26:54.419667 systemd[1]: run-netns-cni\x2d0549134f\x2d1562\x2d1955\x2d1099\x2de26e9ca4ea9f.mount: Deactivated successfully. Jan 29 16:26:54.419781 systemd[1]: run-netns-cni\x2d62258770\x2d30a1\x2d5edb\x2de4a1\x2df8ff361ba70c.mount: Deactivated successfully. 
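The run-netns-cni\x2d... mount units above are systemd's escaped form of the network-namespace bind mounts under /run/netns: the leading slash is dropped, remaining slashes become dashes, and the dashes inside the netns name itself are hex-escaped as \x2d so the unit name stays unambiguous. A sketch of that convention, assuming the standard systemd path escaping rules (systemd-escape --path is the real tool for this):

    package main

    import "fmt"

    // escapePath approximates the escaping visible in the mount unit names
    // above; it is a sketch of the convention, not a full reimplementation of
    // systemd-escape.
    func escapePath(p string) string {
        if len(p) > 0 && p[0] == '/' {
            p = p[1:]
        }
        out := ""
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                out += "-"
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                out += string(c)
            default:
                out += fmt.Sprintf(`\x%02x`, c)
            }
        }
        return out
    }

    func main() {
        fmt.Println(escapePath("/run/netns/cni-8f08c4c0-9f11-8c25-8b2b-435899a62e04") + ".mount")
        // run-netns-cni\x2d8f08c4c0\x2d9f11\x2d8c25\x2d8b2b\x2d435899a62e04.mount
    }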
Jan 29 16:26:54.422007 containerd[1516]: time="2025-01-29T16:26:54.421970260Z" level=info msg="StopPodSandbox for \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\"" Jan 29 16:26:54.422237 containerd[1516]: time="2025-01-29T16:26:54.422211323Z" level=info msg="Ensure that sandbox ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b in task-service has been cleanup successfully" Jan 29 16:26:54.423124 containerd[1516]: time="2025-01-29T16:26:54.422610823Z" level=info msg="TearDown network for sandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\" successfully" Jan 29 16:26:54.423124 containerd[1516]: time="2025-01-29T16:26:54.422623187Z" level=info msg="StopPodSandbox for \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\" returns successfully" Jan 29 16:26:54.423232 kubelet[2658]: I0129 16:26:54.422999 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff" Jan 29 16:26:54.423435 containerd[1516]: time="2025-01-29T16:26:54.423408070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b88d6857-fnfkx,Uid:a373ce1d-d072-4edb-a73d-44d8bb96f265,Namespace:calico-system,Attempt:1,}" Jan 29 16:26:54.423696 containerd[1516]: time="2025-01-29T16:26:54.423665304Z" level=info msg="StopPodSandbox for \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\"" Jan 29 16:26:54.423873 containerd[1516]: time="2025-01-29T16:26:54.423848628Z" level=info msg="Ensure that sandbox 189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff in task-service has been cleanup successfully" Jan 29 16:26:54.424788 containerd[1516]: time="2025-01-29T16:26:54.424025621Z" level=info msg="TearDown network for sandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\" successfully" Jan 29 16:26:54.424788 containerd[1516]: time="2025-01-29T16:26:54.424038194Z" level=info msg="StopPodSandbox for \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\" returns successfully" Jan 29 16:26:54.425125 containerd[1516]: time="2025-01-29T16:26:54.425039063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pjjr9,Uid:91170ca1-19cd-4c25-a591-9a7f6b7062b6,Namespace:kube-system,Attempt:1,}" Jan 29 16:26:54.577151 containerd[1516]: time="2025-01-29T16:26:54.576994298Z" level=error msg="Failed to destroy network for sandbox \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.577591 containerd[1516]: time="2025-01-29T16:26:54.577565391Z" level=error msg="encountered an error cleaning up failed sandbox \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.579940 containerd[1516]: time="2025-01-29T16:26:54.579910595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ssm5,Uid:137b728f-72a7-4e26-ad10-b54fc9528d91,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.580272 kubelet[2658]: E0129 16:26:54.580236 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.580423 kubelet[2658]: E0129 16:26:54.580401 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6ssm5" Jan 29 16:26:54.580522 kubelet[2658]: E0129 16:26:54.580481 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6ssm5" Jan 29 16:26:54.581085 kubelet[2658]: E0129 16:26:54.580609 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6ssm5_kube-system(137b728f-72a7-4e26-ad10-b54fc9528d91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6ssm5_kube-system(137b728f-72a7-4e26-ad10-b54fc9528d91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6ssm5" podUID="137b728f-72a7-4e26-ad10-b54fc9528d91" Jan 29 16:26:54.595563 containerd[1516]: time="2025-01-29T16:26:54.595382422Z" level=error msg="Failed to destroy network for sandbox \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.596111 containerd[1516]: time="2025-01-29T16:26:54.596081896Z" level=error msg="encountered an error cleaning up failed sandbox \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.596327 containerd[1516]: time="2025-01-29T16:26:54.596225376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvfx,Uid:6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0,Namespace:calico-system,Attempt:1,} failed, error" 
error="failed to setup network for sandbox \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.596837 kubelet[2658]: E0129 16:26:54.596627 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.596837 kubelet[2658]: E0129 16:26:54.596704 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:54.596837 kubelet[2658]: E0129 16:26:54.596731 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:54.596982 kubelet[2658]: E0129 16:26:54.596783 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fgvfx_calico-system(6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fgvfx_calico-system(6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fgvfx" podUID="6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0" Jan 29 16:26:54.599874 containerd[1516]: time="2025-01-29T16:26:54.599821480Z" level=error msg="Failed to destroy network for sandbox \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.601002 containerd[1516]: time="2025-01-29T16:26:54.600971820Z" level=error msg="encountered an error cleaning up failed sandbox \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.601930 containerd[1516]: time="2025-01-29T16:26:54.601901515Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-55b88d6857-fnfkx,Uid:a373ce1d-d072-4edb-a73d-44d8bb96f265,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.602469 kubelet[2658]: E0129 16:26:54.602265 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.602469 kubelet[2658]: E0129 16:26:54.602340 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" Jan 29 16:26:54.602469 kubelet[2658]: E0129 16:26:54.602366 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" Jan 29 16:26:54.602600 kubelet[2658]: E0129 16:26:54.602421 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55b88d6857-fnfkx_calico-system(a373ce1d-d072-4edb-a73d-44d8bb96f265)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55b88d6857-fnfkx_calico-system(a373ce1d-d072-4edb-a73d-44d8bb96f265)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" podUID="a373ce1d-d072-4edb-a73d-44d8bb96f265" Jan 29 16:26:54.603833 containerd[1516]: time="2025-01-29T16:26:54.603778210Z" level=error msg="Failed to destroy network for sandbox \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.604281 containerd[1516]: time="2025-01-29T16:26:54.604254605Z" level=error msg="encountered an error cleaning up failed sandbox \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 29 16:26:54.604385 containerd[1516]: time="2025-01-29T16:26:54.604362928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-b8jf8,Uid:ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.604666 kubelet[2658]: E0129 16:26:54.604639 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.604772 kubelet[2658]: E0129 16:26:54.604751 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" Jan 29 16:26:54.604939 containerd[1516]: time="2025-01-29T16:26:54.604913372Z" level=error msg="Failed to destroy network for sandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.605036 kubelet[2658]: E0129 16:26:54.605007 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" Jan 29 16:26:54.605101 kubelet[2658]: E0129 16:26:54.605064 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84ffc4856f-b8jf8_calico-apiserver(ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84ffc4856f-b8jf8_calico-apiserver(ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" podUID="ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e" Jan 29 16:26:54.605470 containerd[1516]: time="2025-01-29T16:26:54.605439160Z" level=error msg="encountered an error cleaning up failed sandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.605538 containerd[1516]: time="2025-01-29T16:26:54.605486899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-chfgm,Uid:e6a6a4b6-0cc0-4539-9ece-d802ad97d93f,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.605651 kubelet[2658]: E0129 16:26:54.605629 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.605718 kubelet[2658]: E0129 16:26:54.605663 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" Jan 29 16:26:54.605718 kubelet[2658]: E0129 16:26:54.605693 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" Jan 29 16:26:54.605811 kubelet[2658]: E0129 16:26:54.605725 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84ffc4856f-chfgm_calico-apiserver(e6a6a4b6-0cc0-4539-9ece-d802ad97d93f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84ffc4856f-chfgm_calico-apiserver(e6a6a4b6-0cc0-4539-9ece-d802ad97d93f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" podUID="e6a6a4b6-0cc0-4539-9ece-d802ad97d93f" Jan 29 16:26:54.610471 containerd[1516]: time="2025-01-29T16:26:54.609096118Z" level=error msg="Failed to destroy network for sandbox \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.610471 containerd[1516]: time="2025-01-29T16:26:54.609530704Z" level=error msg="encountered an error cleaning up failed 
sandbox \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.610471 containerd[1516]: time="2025-01-29T16:26:54.609586268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pjjr9,Uid:91170ca1-19cd-4c25-a591-9a7f6b7062b6,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.610892 kubelet[2658]: E0129 16:26:54.610849 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:54.610973 kubelet[2658]: E0129 16:26:54.610928 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pjjr9" Jan 29 16:26:54.610973 kubelet[2658]: E0129 16:26:54.610955 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pjjr9" Jan 29 16:26:54.611044 kubelet[2658]: E0129 16:26:54.611011 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pjjr9_kube-system(91170ca1-19cd-4c25-a591-9a7f6b7062b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pjjr9_kube-system(91170ca1-19cd-4c25-a591-9a7f6b7062b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pjjr9" podUID="91170ca1-19cd-4c25-a591-9a7f6b7062b6" Jan 29 16:26:54.982611 systemd[1]: run-netns-cni\x2d01f1aeea\x2d1014\x2d01a2\x2ddbb1\x2dfbedf336b954.mount: Deactivated successfully. Jan 29 16:26:54.982730 systemd[1]: run-netns-cni\x2ddfb2ccd7\x2d8352\x2da567\x2d08dd\x2d160a2e735629.mount: Deactivated successfully. 
Jan 29 16:26:55.425862 kubelet[2658]: I0129 16:26:55.425828 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50" Jan 29 16:26:55.426379 containerd[1516]: time="2025-01-29T16:26:55.426348884Z" level=info msg="StopPodSandbox for \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\"" Jan 29 16:26:55.426620 containerd[1516]: time="2025-01-29T16:26:55.426549531Z" level=info msg="Ensure that sandbox b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50 in task-service has been cleanup successfully" Jan 29 16:26:55.428769 systemd[1]: run-netns-cni\x2d233cad91\x2d6dc6\x2db191\x2d3c21\x2def453662beec.mount: Deactivated successfully. Jan 29 16:26:55.429051 containerd[1516]: time="2025-01-29T16:26:55.428832278Z" level=info msg="TearDown network for sandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\" successfully" Jan 29 16:26:55.429051 containerd[1516]: time="2025-01-29T16:26:55.428849129Z" level=info msg="StopPodSandbox for \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\" returns successfully" Jan 29 16:26:55.429765 containerd[1516]: time="2025-01-29T16:26:55.429718902Z" level=info msg="StopPodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\"" Jan 29 16:26:55.429970 containerd[1516]: time="2025-01-29T16:26:55.429872892Z" level=info msg="TearDown network for sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" successfully" Jan 29 16:26:55.429970 containerd[1516]: time="2025-01-29T16:26:55.429925561Z" level=info msg="StopPodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" returns successfully" Jan 29 16:26:55.430335 kubelet[2658]: I0129 16:26:55.430189 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4" Jan 29 16:26:55.430406 containerd[1516]: time="2025-01-29T16:26:55.430382279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-chfgm,Uid:e6a6a4b6-0cc0-4539-9ece-d802ad97d93f,Namespace:calico-apiserver,Attempt:2,}" Jan 29 16:26:55.430619 containerd[1516]: time="2025-01-29T16:26:55.430596891Z" level=info msg="StopPodSandbox for \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\"" Jan 29 16:26:55.430970 containerd[1516]: time="2025-01-29T16:26:55.430910590Z" level=info msg="Ensure that sandbox f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4 in task-service has been cleanup successfully" Jan 29 16:26:55.431356 containerd[1516]: time="2025-01-29T16:26:55.431236963Z" level=info msg="TearDown network for sandbox \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\" successfully" Jan 29 16:26:55.431356 containerd[1516]: time="2025-01-29T16:26:55.431254927Z" level=info msg="StopPodSandbox for \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\" returns successfully" Jan 29 16:26:55.432467 kubelet[2658]: I0129 16:26:55.431717 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f" Jan 29 16:26:55.432595 containerd[1516]: time="2025-01-29T16:26:55.432280393Z" level=info msg="StopPodSandbox for \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\"" Jan 29 16:26:55.432595 containerd[1516]: time="2025-01-29T16:26:55.432355875Z" level=info 
msg="TearDown network for sandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\" successfully" Jan 29 16:26:55.432595 containerd[1516]: time="2025-01-29T16:26:55.432380621Z" level=info msg="StopPodSandbox for \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\" returns successfully" Jan 29 16:26:55.432832 containerd[1516]: time="2025-01-29T16:26:55.432809076Z" level=info msg="StopPodSandbox for \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\"" Jan 29 16:26:55.433141 containerd[1516]: time="2025-01-29T16:26:55.433050229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-b8jf8,Uid:ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e,Namespace:calico-apiserver,Attempt:2,}" Jan 29 16:26:55.433930 containerd[1516]: time="2025-01-29T16:26:55.433428589Z" level=info msg="Ensure that sandbox fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f in task-service has been cleanup successfully" Jan 29 16:26:55.433930 containerd[1516]: time="2025-01-29T16:26:55.433593739Z" level=info msg="TearDown network for sandbox \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\" successfully" Jan 29 16:26:55.433930 containerd[1516]: time="2025-01-29T16:26:55.433606593Z" level=info msg="StopPodSandbox for \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\" returns successfully" Jan 29 16:26:55.434027 kubelet[2658]: I0129 16:26:55.433622 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391" Jan 29 16:26:55.434324 containerd[1516]: time="2025-01-29T16:26:55.434296439Z" level=info msg="StopPodSandbox for \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\"" Jan 29 16:26:55.434401 containerd[1516]: time="2025-01-29T16:26:55.434378493Z" level=info msg="TearDown network for sandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\" successfully" Jan 29 16:26:55.434765 containerd[1516]: time="2025-01-29T16:26:55.434565915Z" level=info msg="StopPodSandbox for \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\" returns successfully" Jan 29 16:26:55.434765 containerd[1516]: time="2025-01-29T16:26:55.434534827Z" level=info msg="StopPodSandbox for \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\"" Jan 29 16:26:55.434765 containerd[1516]: time="2025-01-29T16:26:55.434743378Z" level=info msg="Ensure that sandbox 1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391 in task-service has been cleanup successfully" Jan 29 16:26:55.435356 containerd[1516]: time="2025-01-29T16:26:55.434909279Z" level=info msg="TearDown network for sandbox \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\" successfully" Jan 29 16:26:55.435356 containerd[1516]: time="2025-01-29T16:26:55.434925300Z" level=info msg="StopPodSandbox for \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\" returns successfully" Jan 29 16:26:55.435420 containerd[1516]: time="2025-01-29T16:26:55.435361439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvfx,Uid:6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0,Namespace:calico-system,Attempt:2,}" Jan 29 16:26:55.435423 systemd[1]: run-netns-cni\x2d5ff96e24\x2d0ae5\x2d8503\x2dbdaf\x2db02945324af3.mount: Deactivated successfully. 
Jan 29 16:26:55.435752 containerd[1516]: time="2025-01-29T16:26:55.435648197Z" level=info msg="StopPodSandbox for \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\"" Jan 29 16:26:55.435752 containerd[1516]: time="2025-01-29T16:26:55.435741242Z" level=info msg="TearDown network for sandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\" successfully" Jan 29 16:26:55.435752 containerd[1516]: time="2025-01-29T16:26:55.435750179Z" level=info msg="StopPodSandbox for \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\" returns successfully" Jan 29 16:26:55.437638 containerd[1516]: time="2025-01-29T16:26:55.436265406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pjjr9,Uid:91170ca1-19cd-4c25-a591-9a7f6b7062b6,Namespace:kube-system,Attempt:2,}" Jan 29 16:26:55.438303 kubelet[2658]: I0129 16:26:55.438259 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81" Jan 29 16:26:55.438648 containerd[1516]: time="2025-01-29T16:26:55.438621531Z" level=info msg="StopPodSandbox for \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\"" Jan 29 16:26:55.438868 containerd[1516]: time="2025-01-29T16:26:55.438788294Z" level=info msg="Ensure that sandbox 2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81 in task-service has been cleanup successfully" Jan 29 16:26:55.439134 containerd[1516]: time="2025-01-29T16:26:55.439109317Z" level=info msg="TearDown network for sandbox \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\" successfully" Jan 29 16:26:55.439134 containerd[1516]: time="2025-01-29T16:26:55.439128964Z" level=info msg="StopPodSandbox for \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\" returns successfully" Jan 29 16:26:55.439305 systemd[1]: run-netns-cni\x2d908ae3bf\x2df29f\x2d9e28\x2dfba7\x2d5242de98bed4.mount: Deactivated successfully. Jan 29 16:26:55.439473 containerd[1516]: time="2025-01-29T16:26:55.439447422Z" level=info msg="StopPodSandbox for \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\"" Jan 29 16:26:55.439564 containerd[1516]: time="2025-01-29T16:26:55.439550505Z" level=info msg="TearDown network for sandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\" successfully" Jan 29 16:26:55.439587 containerd[1516]: time="2025-01-29T16:26:55.439564251Z" level=info msg="StopPodSandbox for \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\" returns successfully" Jan 29 16:26:55.439671 systemd[1]: run-netns-cni\x2d59850f9c\x2def0d\x2db15c\x2d0bb2\x2d64a57e6e4c80.mount: Deactivated successfully. 
Jan 29 16:26:55.440545 containerd[1516]: time="2025-01-29T16:26:55.440493457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ssm5,Uid:137b728f-72a7-4e26-ad10-b54fc9528d91,Namespace:kube-system,Attempt:2,}" Jan 29 16:26:55.441636 kubelet[2658]: I0129 16:26:55.441204 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7" Jan 29 16:26:55.441848 containerd[1516]: time="2025-01-29T16:26:55.441777267Z" level=info msg="StopPodSandbox for \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\"" Jan 29 16:26:55.442030 containerd[1516]: time="2025-01-29T16:26:55.442009193Z" level=info msg="Ensure that sandbox 84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7 in task-service has been cleanup successfully" Jan 29 16:26:55.442272 containerd[1516]: time="2025-01-29T16:26:55.442208797Z" level=info msg="TearDown network for sandbox \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\" successfully" Jan 29 16:26:55.442272 containerd[1516]: time="2025-01-29T16:26:55.442225188Z" level=info msg="StopPodSandbox for \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\" returns successfully" Jan 29 16:26:55.442524 containerd[1516]: time="2025-01-29T16:26:55.442500906Z" level=info msg="StopPodSandbox for \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\"" Jan 29 16:26:55.442517 systemd[1]: run-netns-cni\x2d9a2861c8\x2d4ac8\x2d7688\x2db95f\x2d8a8590c14766.mount: Deactivated successfully. Jan 29 16:26:55.442674 containerd[1516]: time="2025-01-29T16:26:55.442569234Z" level=info msg="TearDown network for sandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\" successfully" Jan 29 16:26:55.442674 containerd[1516]: time="2025-01-29T16:26:55.442577810Z" level=info msg="StopPodSandbox for \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\" returns successfully" Jan 29 16:26:55.442953 containerd[1516]: time="2025-01-29T16:26:55.442932356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b88d6857-fnfkx,Uid:a373ce1d-d072-4edb-a73d-44d8bb96f265,Namespace:calico-system,Attempt:2,}" Jan 29 16:26:55.559602 containerd[1516]: time="2025-01-29T16:26:55.559516101Z" level=error msg="Failed to destroy network for sandbox \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.561019 containerd[1516]: time="2025-01-29T16:26:55.560892515Z" level=error msg="encountered an error cleaning up failed sandbox \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.561019 containerd[1516]: time="2025-01-29T16:26:55.560976433Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-chfgm,Uid:e6a6a4b6-0cc0-4539-9ece-d802ad97d93f,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.563146 kubelet[2658]: E0129 16:26:55.562286 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.563146 kubelet[2658]: E0129 16:26:55.562381 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" Jan 29 16:26:55.563146 kubelet[2658]: E0129 16:26:55.562411 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" Jan 29 16:26:55.563399 kubelet[2658]: E0129 16:26:55.563336 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84ffc4856f-chfgm_calico-apiserver(e6a6a4b6-0cc0-4539-9ece-d802ad97d93f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84ffc4856f-chfgm_calico-apiserver(e6a6a4b6-0cc0-4539-9ece-d802ad97d93f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" podUID="e6a6a4b6-0cc0-4539-9ece-d802ad97d93f" Jan 29 16:26:55.568119 containerd[1516]: time="2025-01-29T16:26:55.568069753Z" level=error msg="Failed to destroy network for sandbox \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.569528 containerd[1516]: time="2025-01-29T16:26:55.569492645Z" level=error msg="encountered an error cleaning up failed sandbox \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.569909 containerd[1516]: time="2025-01-29T16:26:55.569881405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvfx,Uid:6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.570880 kubelet[2658]: E0129 16:26:55.570802 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.571513 kubelet[2658]: E0129 16:26:55.570896 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:55.571513 kubelet[2658]: E0129 16:26:55.570926 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:55.571513 kubelet[2658]: E0129 16:26:55.571001 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fgvfx_calico-system(6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fgvfx_calico-system(6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fgvfx" podUID="6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0" Jan 29 16:26:55.576016 containerd[1516]: time="2025-01-29T16:26:55.575971052Z" level=error msg="Failed to destroy network for sandbox \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.576669 containerd[1516]: time="2025-01-29T16:26:55.576634126Z" level=error msg="encountered an error cleaning up failed sandbox \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.576723 containerd[1516]: time="2025-01-29T16:26:55.576697837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-b8jf8,Uid:ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e,Namespace:calico-apiserver,Attempt:2,} 
failed, error" error="failed to setup network for sandbox \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.577033 kubelet[2658]: E0129 16:26:55.576985 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.577105 kubelet[2658]: E0129 16:26:55.577057 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" Jan 29 16:26:55.577105 kubelet[2658]: E0129 16:26:55.577081 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" Jan 29 16:26:55.577189 kubelet[2658]: E0129 16:26:55.577125 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84ffc4856f-b8jf8_calico-apiserver(ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84ffc4856f-b8jf8_calico-apiserver(ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" podUID="ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e" Jan 29 16:26:55.596638 containerd[1516]: time="2025-01-29T16:26:55.596472681Z" level=error msg="Failed to destroy network for sandbox \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.597123 containerd[1516]: time="2025-01-29T16:26:55.597093237Z" level=error msg="encountered an error cleaning up failed sandbox \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.597188 containerd[1516]: time="2025-01-29T16:26:55.597170803Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-6ssm5,Uid:137b728f-72a7-4e26-ad10-b54fc9528d91,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.597558 kubelet[2658]: E0129 16:26:55.597510 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.597630 kubelet[2658]: E0129 16:26:55.597590 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6ssm5" Jan 29 16:26:55.597630 kubelet[2658]: E0129 16:26:55.597620 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6ssm5" Jan 29 16:26:55.597851 kubelet[2658]: E0129 16:26:55.597699 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6ssm5_kube-system(137b728f-72a7-4e26-ad10-b54fc9528d91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6ssm5_kube-system(137b728f-72a7-4e26-ad10-b54fc9528d91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6ssm5" podUID="137b728f-72a7-4e26-ad10-b54fc9528d91" Jan 29 16:26:55.601283 containerd[1516]: time="2025-01-29T16:26:55.601146659Z" level=error msg="Failed to destroy network for sandbox \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.601569 containerd[1516]: time="2025-01-29T16:26:55.601527404Z" level=error msg="encountered an error cleaning up failed sandbox \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.601627 containerd[1516]: 
time="2025-01-29T16:26:55.601604869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b88d6857-fnfkx,Uid:a373ce1d-d072-4edb-a73d-44d8bb96f265,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.601835 kubelet[2658]: E0129 16:26:55.601805 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.601872 kubelet[2658]: E0129 16:26:55.601852 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" Jan 29 16:26:55.601910 kubelet[2658]: E0129 16:26:55.601869 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" Jan 29 16:26:55.601933 kubelet[2658]: E0129 16:26:55.601908 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55b88d6857-fnfkx_calico-system(a373ce1d-d072-4edb-a73d-44d8bb96f265)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55b88d6857-fnfkx_calico-system(a373ce1d-d072-4edb-a73d-44d8bb96f265)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" podUID="a373ce1d-d072-4edb-a73d-44d8bb96f265" Jan 29 16:26:55.605767 containerd[1516]: time="2025-01-29T16:26:55.605740666Z" level=error msg="Failed to destroy network for sandbox \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.606079 containerd[1516]: time="2025-01-29T16:26:55.606054275Z" level=error msg="encountered an error cleaning up failed sandbox \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.606117 containerd[1516]: time="2025-01-29T16:26:55.606097246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pjjr9,Uid:91170ca1-19cd-4c25-a591-9a7f6b7062b6,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.606248 kubelet[2658]: E0129 16:26:55.606223 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:55.606301 kubelet[2658]: E0129 16:26:55.606258 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pjjr9" Jan 29 16:26:55.606301 kubelet[2658]: E0129 16:26:55.606276 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pjjr9" Jan 29 16:26:55.606368 kubelet[2658]: E0129 16:26:55.606308 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pjjr9_kube-system(91170ca1-19cd-4c25-a591-9a7f6b7062b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pjjr9_kube-system(91170ca1-19cd-4c25-a591-9a7f6b7062b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pjjr9" podUID="91170ca1-19cd-4c25-a591-9a7f6b7062b6" Jan 29 16:26:55.983329 systemd[1]: run-netns-cni\x2d030ac9f1\x2d080b\x2d4a94\x2d055f\x2d6612e1c7d4d6.mount: Deactivated successfully. 
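Across 16:26:54–16:26:56 the same cycle repeats for each pod: CreatePodSandbox fails, kubelet logs "Error syncing pod, skipping", the failed sandbox is torn down (StopPodSandbox / TearDown, with the run-netns mount removed), and the pod is re-queued with its Attempt counter incremented (1 → 2 → 3). The toy sketch below only models that visible pattern; it is purely illustrative and not kubelet's real sync loop.

```go
package main

import (
	"errors"
	"fmt"
)

// createSandbox stands in for the CRI RunPodSandbox call; here it always fails
// the way the log shows, because the nodename file never appears.
func createSandbox(pod string, attempt int) error {
	return errors.New("plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory")
}

func main() {
	pod := "kube-system/coredns-668d6bf9bc-6ssm5" // one of the pods seen in the log
	for attempt := 1; attempt <= 3; attempt++ {
		if err := createSandbox(pod, attempt); err != nil {
			fmt.Printf("RunPodSandbox attempt %d for %s failed: %v\n", attempt, pod, err)
			fmt.Println("tearing down failed sandbox and re-queueing pod")
			continue // the pod is retried later with Attempt incremented
		}
		fmt.Println("sandbox created")
		break
	}
}
```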
Jan 29 16:26:56.728418 kubelet[2658]: I0129 16:26:56.728381 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3" Jan 29 16:26:56.728418 kubelet[2658]: I0129 16:26:56.728413 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf" Jan 29 16:26:56.735430 kubelet[2658]: I0129 16:26:56.735404 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a" Jan 29 16:26:56.736731 kubelet[2658]: I0129 16:26:56.736713 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7" Jan 29 16:26:56.738125 kubelet[2658]: I0129 16:26:56.738108 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3" Jan 29 16:26:56.739256 kubelet[2658]: I0129 16:26:56.739234 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38" Jan 29 16:26:56.790418 containerd[1516]: time="2025-01-29T16:26:56.789585037Z" level=info msg="StopPodSandbox for \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\"" Jan 29 16:26:56.790418 containerd[1516]: time="2025-01-29T16:26:56.789684214Z" level=info msg="StopPodSandbox for \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\"" Jan 29 16:26:56.790418 containerd[1516]: time="2025-01-29T16:26:56.789873789Z" level=info msg="StopPodSandbox for \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\"" Jan 29 16:26:56.790418 containerd[1516]: time="2025-01-29T16:26:56.789894939Z" level=info msg="Ensure that sandbox b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3 in task-service has been cleanup successfully" Jan 29 16:26:56.790418 containerd[1516]: time="2025-01-29T16:26:56.790013140Z" level=info msg="Ensure that sandbox 254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3 in task-service has been cleanup successfully" Jan 29 16:26:56.791566 containerd[1516]: time="2025-01-29T16:26:56.790615873Z" level=info msg="TearDown network for sandbox \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\" successfully" Jan 29 16:26:56.791566 containerd[1516]: time="2025-01-29T16:26:56.790642272Z" level=info msg="StopPodSandbox for \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\" returns successfully" Jan 29 16:26:56.791566 containerd[1516]: time="2025-01-29T16:26:56.790746738Z" level=info msg="Ensure that sandbox b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a in task-service has been cleanup successfully" Jan 29 16:26:56.791566 containerd[1516]: time="2025-01-29T16:26:56.790908271Z" level=info msg="TearDown network for sandbox \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\" successfully" Jan 29 16:26:56.791566 containerd[1516]: time="2025-01-29T16:26:56.790923099Z" level=info msg="StopPodSandbox for \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\" returns successfully" Jan 29 16:26:56.791566 containerd[1516]: time="2025-01-29T16:26:56.791047393Z" level=info msg="StopPodSandbox for \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\"" Jan 29 16:26:56.791566 
containerd[1516]: time="2025-01-29T16:26:56.791187516Z" level=info msg="Ensure that sandbox 84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf in task-service has been cleanup successfully" Jan 29 16:26:56.791566 containerd[1516]: time="2025-01-29T16:26:56.791442365Z" level=info msg="TearDown network for sandbox \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\" successfully" Jan 29 16:26:56.791566 containerd[1516]: time="2025-01-29T16:26:56.791454598Z" level=info msg="StopPodSandbox for \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\" returns successfully" Jan 29 16:26:56.791566 containerd[1516]: time="2025-01-29T16:26:56.791522505Z" level=info msg="StopPodSandbox for \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\"" Jan 29 16:26:56.791925 containerd[1516]: time="2025-01-29T16:26:56.791689959Z" level=info msg="Ensure that sandbox ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38 in task-service has been cleanup successfully" Jan 29 16:26:56.792143 containerd[1516]: time="2025-01-29T16:26:56.791945870Z" level=info msg="TearDown network for sandbox \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\" successfully" Jan 29 16:26:56.792143 containerd[1516]: time="2025-01-29T16:26:56.791975025Z" level=info msg="StopPodSandbox for \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\" returns successfully" Jan 29 16:26:56.792143 containerd[1516]: time="2025-01-29T16:26:56.792042000Z" level=info msg="StopPodSandbox for \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\"" Jan 29 16:26:56.792269 containerd[1516]: time="2025-01-29T16:26:56.792242968Z" level=info msg="TearDown network for sandbox \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\" successfully" Jan 29 16:26:56.792269 containerd[1516]: time="2025-01-29T16:26:56.792266482Z" level=info msg="StopPodSandbox for \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\" returns successfully" Jan 29 16:26:56.792418 containerd[1516]: time="2025-01-29T16:26:56.792398720Z" level=info msg="Ensure that sandbox 95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7 in task-service has been cleanup successfully" Jan 29 16:26:56.792641 containerd[1516]: time="2025-01-29T16:26:56.792614326Z" level=info msg="TearDown network for sandbox \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\" successfully" Jan 29 16:26:56.793138 containerd[1516]: time="2025-01-29T16:26:56.792699095Z" level=info msg="StopPodSandbox for \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\" returns successfully" Jan 29 16:26:56.793138 containerd[1516]: time="2025-01-29T16:26:56.792403309Z" level=info msg="StopPodSandbox for \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\"" Jan 29 16:26:56.793138 containerd[1516]: time="2025-01-29T16:26:56.792849737Z" level=info msg="TearDown network for sandbox \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\" successfully" Jan 29 16:26:56.793138 containerd[1516]: time="2025-01-29T16:26:56.792860738Z" level=info msg="StopPodSandbox for \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\" returns successfully" Jan 29 16:26:56.793138 containerd[1516]: time="2025-01-29T16:26:56.792428085Z" level=info msg="StopPodSandbox for \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\"" Jan 29 16:26:56.793138 containerd[1516]: time="2025-01-29T16:26:56.792965985Z" level=info 
msg="TearDown network for sandbox \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\" successfully" Jan 29 16:26:56.793138 containerd[1516]: time="2025-01-29T16:26:56.792983899Z" level=info msg="StopPodSandbox for \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\" returns successfully" Jan 29 16:26:56.793138 containerd[1516]: time="2025-01-29T16:26:56.792439917Z" level=info msg="StopPodSandbox for \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\"" Jan 29 16:26:56.793138 containerd[1516]: time="2025-01-29T16:26:56.793080000Z" level=info msg="TearDown network for sandbox \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\" successfully" Jan 29 16:26:56.793138 containerd[1516]: time="2025-01-29T16:26:56.793088325Z" level=info msg="StopPodSandbox for \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\" returns successfully" Jan 29 16:26:56.794127 containerd[1516]: time="2025-01-29T16:26:56.794107920Z" level=info msg="StopPodSandbox for \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\"" Jan 29 16:26:56.794758 containerd[1516]: time="2025-01-29T16:26:56.794264253Z" level=info msg="StopPodSandbox for \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\"" Jan 29 16:26:56.794758 containerd[1516]: time="2025-01-29T16:26:56.794481331Z" level=info msg="TearDown network for sandbox \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\" successfully" Jan 29 16:26:56.794758 containerd[1516]: time="2025-01-29T16:26:56.794516086Z" level=info msg="StopPodSandbox for \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\" returns successfully" Jan 29 16:26:56.794758 containerd[1516]: time="2025-01-29T16:26:56.794276967Z" level=info msg="StopPodSandbox for \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\"" Jan 29 16:26:56.794758 containerd[1516]: time="2025-01-29T16:26:56.794611926Z" level=info msg="TearDown network for sandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\" successfully" Jan 29 16:26:56.794758 containerd[1516]: time="2025-01-29T16:26:56.794631072Z" level=info msg="StopPodSandbox for \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\" returns successfully" Jan 29 16:26:56.794758 containerd[1516]: time="2025-01-29T16:26:56.794286896Z" level=info msg="StopPodSandbox for \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\"" Jan 29 16:26:56.794758 containerd[1516]: time="2025-01-29T16:26:56.794715511Z" level=info msg="TearDown network for sandbox \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\" successfully" Jan 29 16:26:56.794758 containerd[1516]: time="2025-01-29T16:26:56.794724237Z" level=info msg="StopPodSandbox for \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\" returns successfully" Jan 29 16:26:56.794758 containerd[1516]: time="2025-01-29T16:26:56.794301974Z" level=info msg="StopPodSandbox for \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\"" Jan 29 16:26:56.794505 systemd[1]: run-netns-cni\x2dc7a8ea2a\x2de3db\x2d12ea\x2d8cd8\x2dd0cd4c0931ab.mount: Deactivated successfully. 
Jan 29 16:26:56.795250 containerd[1516]: time="2025-01-29T16:26:56.794896881Z" level=info msg="TearDown network for sandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\" successfully" Jan 29 16:26:56.795250 containerd[1516]: time="2025-01-29T16:26:56.794938780Z" level=info msg="StopPodSandbox for \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\" returns successfully" Jan 29 16:26:56.795250 containerd[1516]: time="2025-01-29T16:26:56.794315891Z" level=info msg="StopPodSandbox for \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\"" Jan 29 16:26:56.795250 containerd[1516]: time="2025-01-29T16:26:56.795065788Z" level=info msg="TearDown network for sandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\" successfully" Jan 29 16:26:56.795250 containerd[1516]: time="2025-01-29T16:26:56.795077871Z" level=info msg="StopPodSandbox for \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\" returns successfully" Jan 29 16:26:56.794612 systemd[1]: run-netns-cni\x2d3f0d9c17\x2d6e5d\x2d8738\x2db12a\x2d03082b2900f3.mount: Deactivated successfully. Jan 29 16:26:56.794715 systemd[1]: run-netns-cni\x2d4ccb8712\x2de395\x2da6a6\x2db529\x2d41e5e9496a3e.mount: Deactivated successfully. Jan 29 16:26:56.794824 systemd[1]: run-netns-cni\x2db9b2df54\x2dd225\x2d62fa\x2d7eab\x2d8109e7461f5f.mount: Deactivated successfully. Jan 29 16:26:56.794910 systemd[1]: run-netns-cni\x2d007c1020\x2dfbc5\x2d791f\x2d4b52\x2d5275d4c1c91b.mount: Deactivated successfully. Jan 29 16:26:56.794982 systemd[1]: run-netns-cni\x2d814b590d\x2d6f48\x2dbd5c\x2d1ec5\x2d9cc5bf7739c6.mount: Deactivated successfully. Jan 29 16:26:56.795728 containerd[1516]: time="2025-01-29T16:26:56.795642862Z" level=info msg="TearDown network for sandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\" successfully" Jan 29 16:26:56.795728 containerd[1516]: time="2025-01-29T16:26:56.795682376Z" level=info msg="StopPodSandbox for \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\" returns successfully" Jan 29 16:26:56.796200 containerd[1516]: time="2025-01-29T16:26:56.795894193Z" level=info msg="StopPodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\"" Jan 29 16:26:56.796200 containerd[1516]: time="2025-01-29T16:26:56.795934950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvfx,Uid:6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0,Namespace:calico-system,Attempt:3,}" Jan 29 16:26:56.796200 containerd[1516]: time="2025-01-29T16:26:56.795958365Z" level=info msg="TearDown network for sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" successfully" Jan 29 16:26:56.796200 containerd[1516]: time="2025-01-29T16:26:56.795967171Z" level=info msg="StopPodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" returns successfully" Jan 29 16:26:56.796200 containerd[1516]: time="2025-01-29T16:26:56.796008870Z" level=info msg="StopPodSandbox for \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\"" Jan 29 16:26:56.796200 containerd[1516]: time="2025-01-29T16:26:56.796063422Z" level=info msg="TearDown network for sandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\" successfully" Jan 29 16:26:56.796200 containerd[1516]: time="2025-01-29T16:26:56.796071497Z" level=info msg="StopPodSandbox for \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\" returns successfully" Jan 29 16:26:56.796200 
containerd[1516]: time="2025-01-29T16:26:56.796097576Z" level=info msg="StopPodSandbox for \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\"" Jan 29 16:26:56.796200 containerd[1516]: time="2025-01-29T16:26:56.796121811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b88d6857-fnfkx,Uid:a373ce1d-d072-4edb-a73d-44d8bb96f265,Namespace:calico-system,Attempt:3,}" Jan 29 16:26:56.796200 containerd[1516]: time="2025-01-29T16:26:56.796150094Z" level=info msg="TearDown network for sandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\" successfully" Jan 29 16:26:56.796200 containerd[1516]: time="2025-01-29T16:26:56.796157538Z" level=info msg="StopPodSandbox for \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\" returns successfully" Jan 29 16:26:56.797483 containerd[1516]: time="2025-01-29T16:26:56.797355257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-chfgm,Uid:e6a6a4b6-0cc0-4539-9ece-d802ad97d93f,Namespace:calico-apiserver,Attempt:3,}" Jan 29 16:26:56.797598 containerd[1516]: time="2025-01-29T16:26:56.797567616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-b8jf8,Uid:ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e,Namespace:calico-apiserver,Attempt:3,}" Jan 29 16:26:56.798296 containerd[1516]: time="2025-01-29T16:26:56.798171851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ssm5,Uid:137b728f-72a7-4e26-ad10-b54fc9528d91,Namespace:kube-system,Attempt:3,}" Jan 29 16:26:56.798655 containerd[1516]: time="2025-01-29T16:26:56.798349995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pjjr9,Uid:91170ca1-19cd-4c25-a591-9a7f6b7062b6,Namespace:kube-system,Attempt:3,}" Jan 29 16:26:56.912386 containerd[1516]: time="2025-01-29T16:26:56.912336594Z" level=error msg="Failed to destroy network for sandbox \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:56.913105 containerd[1516]: time="2025-01-29T16:26:56.913082524Z" level=error msg="encountered an error cleaning up failed sandbox \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:56.913177 containerd[1516]: time="2025-01-29T16:26:56.913136716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvfx,Uid:6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:56.913464 kubelet[2658]: E0129 16:26:56.913424 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:56.913525 kubelet[2658]: E0129 16:26:56.913495 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:56.913552 kubelet[2658]: E0129 16:26:56.913523 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:56.913605 kubelet[2658]: E0129 16:26:56.913577 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fgvfx_calico-system(6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fgvfx_calico-system(6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fgvfx" podUID="6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0" Jan 29 16:26:56.990737 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5-shm.mount: Deactivated successfully. Jan 29 16:26:57.080401 containerd[1516]: time="2025-01-29T16:26:57.076584059Z" level=error msg="Failed to destroy network for sandbox \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.079994 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2-shm.mount: Deactivated successfully. 
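The sandbox failure above, and the ones that follow for the remaining pods, all trace to the same missing file: the Calico CNI plugin cannot stat /var/lib/calico/nodename, and its message advises checking that the calico/node container is running and has mounted /var/lib/calico/. A minimal sketch of that same check, assuming it is run directly on the affected node (the path comes from the error text; this only mirrors the failing stat, it is not the plugin's code):

    from pathlib import Path

    nodename = Path("/var/lib/calico/nodename")      # path quoted in the CNI errors above
    if nodename.is_file():
        print(f"calico nodename present: {nodename.read_text().strip()!r}")
    else:
        # This is the plugin's failure mode: the file is written by calico/node,
        # so its absence usually means calico-node is not running (or not Ready)
        # on this host, or /var/lib/calico is not mounted into that container.
        print(f"{nodename} is missing; check the calico-node pod on this host")

Until that file exists, every RunPodSandbox attempt keeps failing with the same error, which is exactly the retry loop visible in the rest of this log.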
Jan 29 16:26:57.081243 containerd[1516]: time="2025-01-29T16:26:57.081033112Z" level=error msg="encountered an error cleaning up failed sandbox \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.081243 containerd[1516]: time="2025-01-29T16:26:57.081131277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b88d6857-fnfkx,Uid:a373ce1d-d072-4edb-a73d-44d8bb96f265,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.081401 kubelet[2658]: E0129 16:26:57.081355 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.081494 kubelet[2658]: E0129 16:26:57.081424 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" Jan 29 16:26:57.081494 kubelet[2658]: E0129 16:26:57.081448 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" Jan 29 16:26:57.081568 kubelet[2658]: E0129 16:26:57.081490 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55b88d6857-fnfkx_calico-system(a373ce1d-d072-4edb-a73d-44d8bb96f265)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55b88d6857-fnfkx_calico-system(a373ce1d-d072-4edb-a73d-44d8bb96f265)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" podUID="a373ce1d-d072-4edb-a73d-44d8bb96f265" Jan 29 16:26:57.089915 containerd[1516]: time="2025-01-29T16:26:57.089553440Z" level=error msg="Failed to destroy network for sandbox \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.092859 containerd[1516]: time="2025-01-29T16:26:57.091745115Z" level=error msg="encountered an error cleaning up failed sandbox \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.092859 containerd[1516]: time="2025-01-29T16:26:57.091823101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-b8jf8,Uid:ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.092651 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212-shm.mount: Deactivated successfully. Jan 29 16:26:57.093141 kubelet[2658]: E0129 16:26:57.092228 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.093141 kubelet[2658]: E0129 16:26:57.092291 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" Jan 29 16:26:57.093141 kubelet[2658]: E0129 16:26:57.092313 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" Jan 29 16:26:57.094341 kubelet[2658]: E0129 16:26:57.092367 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84ffc4856f-b8jf8_calico-apiserver(ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84ffc4856f-b8jf8_calico-apiserver(ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" podUID="ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e" Jan 29 16:26:57.102696 containerd[1516]: time="2025-01-29T16:26:57.102525476Z" level=error msg="Failed to destroy network for sandbox \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.105224 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5-shm.mount: Deactivated successfully. Jan 29 16:26:57.106318 containerd[1516]: time="2025-01-29T16:26:57.105943294Z" level=error msg="encountered an error cleaning up failed sandbox \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.106318 containerd[1516]: time="2025-01-29T16:26:57.106001313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-chfgm,Uid:e6a6a4b6-0cc0-4539-9ece-d802ad97d93f,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.107238 kubelet[2658]: E0129 16:26:57.106507 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.107238 kubelet[2658]: E0129 16:26:57.106570 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" Jan 29 16:26:57.107238 kubelet[2658]: E0129 16:26:57.106591 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" Jan 29 16:26:57.107345 kubelet[2658]: E0129 16:26:57.106645 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84ffc4856f-chfgm_calico-apiserver(e6a6a4b6-0cc0-4539-9ece-d802ad97d93f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84ffc4856f-chfgm_calico-apiserver(e6a6a4b6-0cc0-4539-9ece-d802ad97d93f)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" podUID="e6a6a4b6-0cc0-4539-9ece-d802ad97d93f" Jan 29 16:26:57.112084 containerd[1516]: time="2025-01-29T16:26:57.110814771Z" level=error msg="Failed to destroy network for sandbox \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.112084 containerd[1516]: time="2025-01-29T16:26:57.111498624Z" level=error msg="encountered an error cleaning up failed sandbox \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.112084 containerd[1516]: time="2025-01-29T16:26:57.111553236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ssm5,Uid:137b728f-72a7-4e26-ad10-b54fc9528d91,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.112206 kubelet[2658]: E0129 16:26:57.111779 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.112206 kubelet[2658]: E0129 16:26:57.111856 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6ssm5" Jan 29 16:26:57.112206 kubelet[2658]: E0129 16:26:57.111875 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6ssm5" Jan 29 16:26:57.112298 kubelet[2658]: E0129 16:26:57.111915 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6ssm5_kube-system(137b728f-72a7-4e26-ad10-b54fc9528d91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-6ssm5_kube-system(137b728f-72a7-4e26-ad10-b54fc9528d91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6ssm5" podUID="137b728f-72a7-4e26-ad10-b54fc9528d91" Jan 29 16:26:57.114736 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982-shm.mount: Deactivated successfully. Jan 29 16:26:57.120549 containerd[1516]: time="2025-01-29T16:26:57.120499584Z" level=error msg="Failed to destroy network for sandbox \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.121049 containerd[1516]: time="2025-01-29T16:26:57.120965769Z" level=error msg="encountered an error cleaning up failed sandbox \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.121049 containerd[1516]: time="2025-01-29T16:26:57.121031173Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pjjr9,Uid:91170ca1-19cd-4c25-a591-9a7f6b7062b6,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.121331 kubelet[2658]: E0129 16:26:57.121286 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.121390 kubelet[2658]: E0129 16:26:57.121360 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pjjr9" Jan 29 16:26:57.121390 kubelet[2658]: E0129 16:26:57.121379 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pjjr9" Jan 29 16:26:57.121468 kubelet[2658]: E0129 16:26:57.121441 2658 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pjjr9_kube-system(91170ca1-19cd-4c25-a591-9a7f6b7062b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pjjr9_kube-system(91170ca1-19cd-4c25-a591-9a7f6b7062b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pjjr9" podUID="91170ca1-19cd-4c25-a591-9a7f6b7062b6" Jan 29 16:26:57.745622 kubelet[2658]: I0129 16:26:57.745572 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5" Jan 29 16:26:57.747109 containerd[1516]: time="2025-01-29T16:26:57.746843550Z" level=info msg="StopPodSandbox for \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\"" Jan 29 16:26:57.747185 containerd[1516]: time="2025-01-29T16:26:57.747130780Z" level=info msg="Ensure that sandbox 8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5 in task-service has been cleanup successfully" Jan 29 16:26:57.748820 containerd[1516]: time="2025-01-29T16:26:57.747463203Z" level=info msg="TearDown network for sandbox \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\" successfully" Jan 29 16:26:57.748820 containerd[1516]: time="2025-01-29T16:26:57.747521162Z" level=info msg="StopPodSandbox for \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\" returns successfully" Jan 29 16:26:57.748820 containerd[1516]: time="2025-01-29T16:26:57.747934017Z" level=info msg="StopPodSandbox for \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\"" Jan 29 16:26:57.748820 containerd[1516]: time="2025-01-29T16:26:57.748273104Z" level=info msg="TearDown network for sandbox \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\" successfully" Jan 29 16:26:57.748820 containerd[1516]: time="2025-01-29T16:26:57.748290997Z" level=info msg="StopPodSandbox for \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\" returns successfully" Jan 29 16:26:57.749026 containerd[1516]: time="2025-01-29T16:26:57.748853283Z" level=info msg="StopPodSandbox for \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\"" Jan 29 16:26:57.749062 containerd[1516]: time="2025-01-29T16:26:57.749025036Z" level=info msg="TearDown network for sandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\" successfully" Jan 29 16:26:57.749062 containerd[1516]: time="2025-01-29T16:26:57.749038040Z" level=info msg="StopPodSandbox for \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\" returns successfully" Jan 29 16:26:57.750161 containerd[1516]: time="2025-01-29T16:26:57.750057885Z" level=info msg="StopPodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\"" Jan 29 16:26:57.750400 containerd[1516]: time="2025-01-29T16:26:57.750376774Z" level=info msg="TearDown network for sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" successfully" Jan 29 16:26:57.750489 containerd[1516]: time="2025-01-29T16:26:57.750468346Z" level=info msg="StopPodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" returns 
successfully" Jan 29 16:26:57.751938 containerd[1516]: time="2025-01-29T16:26:57.751895375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-chfgm,Uid:e6a6a4b6-0cc0-4539-9ece-d802ad97d93f,Namespace:calico-apiserver,Attempt:4,}" Jan 29 16:26:57.802581 kubelet[2658]: I0129 16:26:57.802531 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212" Jan 29 16:26:57.804627 containerd[1516]: time="2025-01-29T16:26:57.804097200Z" level=info msg="StopPodSandbox for \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\"" Jan 29 16:26:57.804627 containerd[1516]: time="2025-01-29T16:26:57.804419945Z" level=info msg="Ensure that sandbox 35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212 in task-service has been cleanup successfully" Jan 29 16:26:57.805191 containerd[1516]: time="2025-01-29T16:26:57.805156979Z" level=info msg="TearDown network for sandbox \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\" successfully" Jan 29 16:26:57.805267 containerd[1516]: time="2025-01-29T16:26:57.805250074Z" level=info msg="StopPodSandbox for \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\" returns successfully" Jan 29 16:26:57.808554 containerd[1516]: time="2025-01-29T16:26:57.808524512Z" level=info msg="StopPodSandbox for \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\"" Jan 29 16:26:57.809209 containerd[1516]: time="2025-01-29T16:26:57.809174222Z" level=info msg="TearDown network for sandbox \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\" successfully" Jan 29 16:26:57.809352 containerd[1516]: time="2025-01-29T16:26:57.809330665Z" level=info msg="StopPodSandbox for \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\" returns successfully" Jan 29 16:26:57.811047 containerd[1516]: time="2025-01-29T16:26:57.811003787Z" level=info msg="StopPodSandbox for \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\"" Jan 29 16:26:57.813037 containerd[1516]: time="2025-01-29T16:26:57.813009694Z" level=info msg="TearDown network for sandbox \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\" successfully" Jan 29 16:26:57.813142 containerd[1516]: time="2025-01-29T16:26:57.813121243Z" level=info msg="StopPodSandbox for \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\" returns successfully" Jan 29 16:26:57.814713 containerd[1516]: time="2025-01-29T16:26:57.814670952Z" level=info msg="StopPodSandbox for \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\"" Jan 29 16:26:57.814888 containerd[1516]: time="2025-01-29T16:26:57.814820183Z" level=info msg="TearDown network for sandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\" successfully" Jan 29 16:26:57.814888 containerd[1516]: time="2025-01-29T16:26:57.814870107Z" level=info msg="StopPodSandbox for \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\" returns successfully" Jan 29 16:26:57.815695 containerd[1516]: time="2025-01-29T16:26:57.815538381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-b8jf8,Uid:ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e,Namespace:calico-apiserver,Attempt:4,}" Jan 29 16:26:57.816593 kubelet[2658]: I0129 16:26:57.816548 2658 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5" Jan 29 16:26:57.817940 containerd[1516]: time="2025-01-29T16:26:57.817240747Z" level=info msg="StopPodSandbox for \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\"" Jan 29 16:26:57.817940 containerd[1516]: time="2025-01-29T16:26:57.817475388Z" level=info msg="Ensure that sandbox 46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5 in task-service has been cleanup successfully" Jan 29 16:26:57.820402 containerd[1516]: time="2025-01-29T16:26:57.820347190Z" level=info msg="TearDown network for sandbox \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\" successfully" Jan 29 16:26:57.820512 containerd[1516]: time="2025-01-29T16:26:57.820488977Z" level=info msg="StopPodSandbox for \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\" returns successfully" Jan 29 16:26:57.821162 containerd[1516]: time="2025-01-29T16:26:57.821131263Z" level=info msg="StopPodSandbox for \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\"" Jan 29 16:26:57.821737 containerd[1516]: time="2025-01-29T16:26:57.821680494Z" level=info msg="TearDown network for sandbox \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\" successfully" Jan 29 16:26:57.821737 containerd[1516]: time="2025-01-29T16:26:57.821721460Z" level=info msg="StopPodSandbox for \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\" returns successfully" Jan 29 16:26:57.821866 kubelet[2658]: I0129 16:26:57.821772 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe" Jan 29 16:26:57.822154 containerd[1516]: time="2025-01-29T16:26:57.822122844Z" level=info msg="StopPodSandbox for \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\"" Jan 29 16:26:57.822217 containerd[1516]: time="2025-01-29T16:26:57.822199458Z" level=info msg="TearDown network for sandbox \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\" successfully" Jan 29 16:26:57.822217 containerd[1516]: time="2025-01-29T16:26:57.822209276Z" level=info msg="StopPodSandbox for \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\" returns successfully" Jan 29 16:26:57.823680 containerd[1516]: time="2025-01-29T16:26:57.823639583Z" level=info msg="StopPodSandbox for \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\"" Jan 29 16:26:57.824068 containerd[1516]: time="2025-01-29T16:26:57.823722979Z" level=info msg="StopPodSandbox for \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\"" Jan 29 16:26:57.826560 containerd[1516]: time="2025-01-29T16:26:57.824132227Z" level=info msg="TearDown network for sandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\" successfully" Jan 29 16:26:57.826560 containerd[1516]: time="2025-01-29T16:26:57.824143749Z" level=info msg="StopPodSandbox for \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\" returns successfully" Jan 29 16:26:57.826560 containerd[1516]: time="2025-01-29T16:26:57.825668301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvfx,Uid:6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0,Namespace:calico-system,Attempt:4,}" Jan 29 16:26:57.826729 containerd[1516]: time="2025-01-29T16:26:57.826693867Z" level=info msg="Ensure that sandbox 9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe in task-service has been cleanup successfully" 
Jan 29 16:26:57.828091 containerd[1516]: time="2025-01-29T16:26:57.828007033Z" level=info msg="TearDown network for sandbox \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\" successfully" Jan 29 16:26:57.828091 containerd[1516]: time="2025-01-29T16:26:57.828035747Z" level=info msg="StopPodSandbox for \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\" returns successfully" Jan 29 16:26:57.828877 kubelet[2658]: I0129 16:26:57.828842 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982" Jan 29 16:26:57.829584 containerd[1516]: time="2025-01-29T16:26:57.829543918Z" level=info msg="StopPodSandbox for \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\"" Jan 29 16:26:57.829754 containerd[1516]: time="2025-01-29T16:26:57.829674333Z" level=info msg="TearDown network for sandbox \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\" successfully" Jan 29 16:26:57.829754 containerd[1516]: time="2025-01-29T16:26:57.829685945Z" level=info msg="StopPodSandbox for \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\" returns successfully" Jan 29 16:26:57.829867 containerd[1516]: time="2025-01-29T16:26:57.829853470Z" level=info msg="StopPodSandbox for \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\"" Jan 29 16:26:57.830044 containerd[1516]: time="2025-01-29T16:26:57.830022036Z" level=info msg="Ensure that sandbox 93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982 in task-service has been cleanup successfully" Jan 29 16:26:57.830692 containerd[1516]: time="2025-01-29T16:26:57.830657368Z" level=info msg="StopPodSandbox for \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\"" Jan 29 16:26:57.830756 containerd[1516]: time="2025-01-29T16:26:57.830741898Z" level=info msg="TearDown network for sandbox \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\" successfully" Jan 29 16:26:57.830756 containerd[1516]: time="2025-01-29T16:26:57.830751907Z" level=info msg="StopPodSandbox for \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\" returns successfully" Jan 29 16:26:57.831415 containerd[1516]: time="2025-01-29T16:26:57.831261874Z" level=info msg="StopPodSandbox for \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\"" Jan 29 16:26:57.831415 containerd[1516]: time="2025-01-29T16:26:57.831347274Z" level=info msg="TearDown network for sandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\" successfully" Jan 29 16:26:57.831415 containerd[1516]: time="2025-01-29T16:26:57.831357153Z" level=info msg="StopPodSandbox for \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\" returns successfully" Jan 29 16:26:57.833160 containerd[1516]: time="2025-01-29T16:26:57.832824247Z" level=info msg="TearDown network for sandbox \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\" successfully" Jan 29 16:26:57.833160 containerd[1516]: time="2025-01-29T16:26:57.832845707Z" level=info msg="StopPodSandbox for \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\" returns successfully" Jan 29 16:26:57.834046 containerd[1516]: time="2025-01-29T16:26:57.833920726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pjjr9,Uid:91170ca1-19cd-4c25-a591-9a7f6b7062b6,Namespace:kube-system,Attempt:4,}" Jan 29 16:26:57.836128 containerd[1516]: 
time="2025-01-29T16:26:57.836104577Z" level=info msg="StopPodSandbox for \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\"" Jan 29 16:26:57.837422 kubelet[2658]: I0129 16:26:57.837230 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2" Jan 29 16:26:57.837513 containerd[1516]: time="2025-01-29T16:26:57.836562837Z" level=info msg="TearDown network for sandbox \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\" successfully" Jan 29 16:26:57.837759 containerd[1516]: time="2025-01-29T16:26:57.837578053Z" level=info msg="StopPodSandbox for \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\" returns successfully" Jan 29 16:26:57.839365 containerd[1516]: time="2025-01-29T16:26:57.839239743Z" level=info msg="StopPodSandbox for \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\"" Jan 29 16:26:57.840048 containerd[1516]: time="2025-01-29T16:26:57.840007855Z" level=info msg="StopPodSandbox for \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\"" Jan 29 16:26:57.840239 containerd[1516]: time="2025-01-29T16:26:57.840146005Z" level=info msg="TearDown network for sandbox \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\" successfully" Jan 29 16:26:57.840239 containerd[1516]: time="2025-01-29T16:26:57.840192612Z" level=info msg="StopPodSandbox for \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\" returns successfully" Jan 29 16:26:57.840546 containerd[1516]: time="2025-01-29T16:26:57.840376798Z" level=info msg="Ensure that sandbox b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2 in task-service has been cleanup successfully" Jan 29 16:26:57.840880 containerd[1516]: time="2025-01-29T16:26:57.840861077Z" level=info msg="StopPodSandbox for \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\"" Jan 29 16:26:57.841092 containerd[1516]: time="2025-01-29T16:26:57.841031598Z" level=info msg="TearDown network for sandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\" successfully" Jan 29 16:26:57.841092 containerd[1516]: time="2025-01-29T16:26:57.841045965Z" level=info msg="StopPodSandbox for \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\" returns successfully" Jan 29 16:26:57.841593 containerd[1516]: time="2025-01-29T16:26:57.841538089Z" level=info msg="TearDown network for sandbox \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\" successfully" Jan 29 16:26:57.841767 containerd[1516]: time="2025-01-29T16:26:57.841752771Z" level=info msg="StopPodSandbox for \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\" returns successfully" Jan 29 16:26:57.842111 containerd[1516]: time="2025-01-29T16:26:57.842068023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ssm5,Uid:137b728f-72a7-4e26-ad10-b54fc9528d91,Namespace:kube-system,Attempt:4,}" Jan 29 16:26:57.842664 containerd[1516]: time="2025-01-29T16:26:57.842527757Z" level=info msg="StopPodSandbox for \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\"" Jan 29 16:26:57.842664 containerd[1516]: time="2025-01-29T16:26:57.842618116Z" level=info msg="TearDown network for sandbox \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\" successfully" Jan 29 16:26:57.842664 containerd[1516]: time="2025-01-29T16:26:57.842627383Z" level=info msg="StopPodSandbox for 
\"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\" returns successfully" Jan 29 16:26:57.845431 containerd[1516]: time="2025-01-29T16:26:57.845406923Z" level=info msg="StopPodSandbox for \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\"" Jan 29 16:26:57.845611 containerd[1516]: time="2025-01-29T16:26:57.845561172Z" level=info msg="TearDown network for sandbox \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\" successfully" Jan 29 16:26:57.845694 containerd[1516]: time="2025-01-29T16:26:57.845669195Z" level=info msg="StopPodSandbox for \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\" returns successfully" Jan 29 16:26:57.847823 containerd[1516]: time="2025-01-29T16:26:57.847753228Z" level=info msg="StopPodSandbox for \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\"" Jan 29 16:26:57.848018 containerd[1516]: time="2025-01-29T16:26:57.847912026Z" level=info msg="TearDown network for sandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\" successfully" Jan 29 16:26:57.848018 containerd[1516]: time="2025-01-29T16:26:57.847927224Z" level=info msg="StopPodSandbox for \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\" returns successfully" Jan 29 16:26:57.850007 containerd[1516]: time="2025-01-29T16:26:57.849968487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b88d6857-fnfkx,Uid:a373ce1d-d072-4edb-a73d-44d8bb96f265,Namespace:calico-system,Attempt:4,}" Jan 29 16:26:57.869752 containerd[1516]: time="2025-01-29T16:26:57.869702660Z" level=error msg="Failed to destroy network for sandbox \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.870279 containerd[1516]: time="2025-01-29T16:26:57.870257132Z" level=error msg="encountered an error cleaning up failed sandbox \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.870395 containerd[1516]: time="2025-01-29T16:26:57.870376786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-chfgm,Uid:e6a6a4b6-0cc0-4539-9ece-d802ad97d93f,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.870782 kubelet[2658]: E0129 16:26:57.870739 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:57.870975 kubelet[2658]: E0129 16:26:57.870938 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" Jan 29 16:26:57.871006 kubelet[2658]: E0129 16:26:57.870981 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" Jan 29 16:26:57.871076 kubelet[2658]: E0129 16:26:57.871046 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84ffc4856f-chfgm_calico-apiserver(e6a6a4b6-0cc0-4539-9ece-d802ad97d93f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84ffc4856f-chfgm_calico-apiserver(e6a6a4b6-0cc0-4539-9ece-d802ad97d93f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" podUID="e6a6a4b6-0cc0-4539-9ece-d802ad97d93f" Jan 29 16:26:57.982262 systemd[1]: run-netns-cni\x2d8f1e1563\x2db154\x2d744a\x2dc584\x2d263657f2d885.mount: Deactivated successfully. Jan 29 16:26:57.982516 systemd[1]: run-netns-cni\x2d35414835\x2d2e49\x2d7e12\x2d8257\x2d3010b18483e5.mount: Deactivated successfully. Jan 29 16:26:57.982588 systemd[1]: run-netns-cni\x2de44d577b\x2dc4f9\x2d8119\x2d4ecc\x2deffdec2502be.mount: Deactivated successfully. Jan 29 16:26:57.983001 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe-shm.mount: Deactivated successfully. Jan 29 16:26:57.983099 systemd[1]: run-netns-cni\x2db3569773\x2dae57\x2d9c2e\x2d9f73\x2da9ab57f33116.mount: Deactivated successfully. Jan 29 16:26:57.983170 systemd[1]: run-netns-cni\x2d0f67926c\x2d3e2e\x2d5074\x2ddaa9\x2d494af408fa66.mount: Deactivated successfully. Jan 29 16:26:57.983241 systemd[1]: run-netns-cni\x2d7578a9f2\x2dc38e\x2d698b\x2daba7\x2dacff4c1c30b0.mount: Deactivated successfully. Jan 29 16:26:58.727313 systemd[1]: Started sshd@10-10.0.0.146:22-10.0.0.1:37644.service - OpenSSH per-connection server daemon (10.0.0.1:37644). 
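The teardown and retry cycle continues below even as an operator session opens over SSH, so the next check the error message itself points at is whether the calico-node DaemonSet is scheduled and Ready on this node. A hedged sketch using the official Kubernetes Python client; the calico-system namespace is an assumption based on the other Calico pods in this log, and kube-system is also common depending on how Calico was installed:

    from kubernetes import client, config

    config.load_kube_config()                 # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()

    # Namespace and DaemonSet name are assumptions; adjust to the actual install.
    ds = apps.read_namespaced_daemon_set("calico-node", "calico-system")
    ready = ds.status.number_ready or 0
    desired = ds.status.desired_number_scheduled or 0
    print(f"calico-node Ready on {ready}/{desired} nodes")
    if ready < desired:
        print("calico-node is not Ready everywhere; /var/lib/calico/nodename "
              "stays missing on affected hosts until it is.")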
Jan 29 16:26:58.842231 kubelet[2658]: I0129 16:26:58.842170 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23" Jan 29 16:26:58.842775 containerd[1516]: time="2025-01-29T16:26:58.842693853Z" level=info msg="StopPodSandbox for \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\"" Jan 29 16:26:58.845202 containerd[1516]: time="2025-01-29T16:26:58.843006500Z" level=info msg="Ensure that sandbox 1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23 in task-service has been cleanup successfully" Jan 29 16:26:58.845202 containerd[1516]: time="2025-01-29T16:26:58.843272339Z" level=info msg="TearDown network for sandbox \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\" successfully" Jan 29 16:26:58.845202 containerd[1516]: time="2025-01-29T16:26:58.843289833Z" level=info msg="StopPodSandbox for \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\" returns successfully" Jan 29 16:26:58.845202 containerd[1516]: time="2025-01-29T16:26:58.843613019Z" level=info msg="StopPodSandbox for \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\"" Jan 29 16:26:58.845202 containerd[1516]: time="2025-01-29T16:26:58.843712976Z" level=info msg="TearDown network for sandbox \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\" successfully" Jan 29 16:26:58.845202 containerd[1516]: time="2025-01-29T16:26:58.843726151Z" level=info msg="StopPodSandbox for \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\" returns successfully" Jan 29 16:26:58.845654 systemd[1]: run-netns-cni\x2d281cd575\x2d6759\x2d45cb\x2d7281\x2dfd361bbf2308.mount: Deactivated successfully. Jan 29 16:26:58.846984 containerd[1516]: time="2025-01-29T16:26:58.846948761Z" level=info msg="StopPodSandbox for \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\"" Jan 29 16:26:58.847086 containerd[1516]: time="2025-01-29T16:26:58.847066081Z" level=info msg="TearDown network for sandbox \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\" successfully" Jan 29 16:26:58.847114 containerd[1516]: time="2025-01-29T16:26:58.847081611Z" level=info msg="StopPodSandbox for \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\" returns successfully" Jan 29 16:26:58.847452 containerd[1516]: time="2025-01-29T16:26:58.847415838Z" level=info msg="StopPodSandbox for \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\"" Jan 29 16:26:58.847590 containerd[1516]: time="2025-01-29T16:26:58.847530543Z" level=info msg="TearDown network for sandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\" successfully" Jan 29 16:26:58.847633 containerd[1516]: time="2025-01-29T16:26:58.847588213Z" level=info msg="StopPodSandbox for \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\" returns successfully" Jan 29 16:26:58.847960 containerd[1516]: time="2025-01-29T16:26:58.847843241Z" level=info msg="StopPodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\"" Jan 29 16:26:58.847960 containerd[1516]: time="2025-01-29T16:26:58.847925405Z" level=info msg="TearDown network for sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" successfully" Jan 29 16:26:58.847960 containerd[1516]: time="2025-01-29T16:26:58.847937037Z" level=info msg="StopPodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" returns 
successfully" Jan 29 16:26:58.848558 containerd[1516]: time="2025-01-29T16:26:58.848524781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-chfgm,Uid:e6a6a4b6-0cc0-4539-9ece-d802ad97d93f,Namespace:calico-apiserver,Attempt:5,}" Jan 29 16:26:58.935342 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 37644 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:58.937436 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:58.943139 systemd-logind[1493]: New session 11 of user core. Jan 29 16:26:58.949007 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 16:26:59.205010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896730553.mount: Deactivated successfully. Jan 29 16:26:59.290716 sshd[4322]: Connection closed by 10.0.0.1 port 37644 Jan 29 16:26:59.291516 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:59.297085 systemd[1]: sshd@10-10.0.0.146:22-10.0.0.1:37644.service: Deactivated successfully. Jan 29 16:26:59.299768 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 16:26:59.300831 systemd-logind[1493]: Session 11 logged out. Waiting for processes to exit. Jan 29 16:26:59.301971 systemd-logind[1493]: Removed session 11. Jan 29 16:26:59.435198 containerd[1516]: time="2025-01-29T16:26:59.431046176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:59.473051 containerd[1516]: time="2025-01-29T16:26:59.472914648Z" level=error msg="Failed to destroy network for sandbox \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.473435 containerd[1516]: time="2025-01-29T16:26:59.473396001Z" level=error msg="encountered an error cleaning up failed sandbox \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.473577 containerd[1516]: time="2025-01-29T16:26:59.473468437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-b8jf8,Uid:ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.474080 kubelet[2658]: E0129 16:26:59.474027 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.474189 kubelet[2658]: E0129 16:26:59.474109 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" Jan 29 16:26:59.474189 kubelet[2658]: E0129 16:26:59.474134 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" Jan 29 16:26:59.474303 kubelet[2658]: E0129 16:26:59.474189 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84ffc4856f-b8jf8_calico-apiserver(ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84ffc4856f-b8jf8_calico-apiserver(ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" podUID="ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e" Jan 29 16:26:59.478006 containerd[1516]: time="2025-01-29T16:26:59.477935834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 16:26:59.485175 containerd[1516]: time="2025-01-29T16:26:59.485124079Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:59.533015 containerd[1516]: time="2025-01-29T16:26:59.532947649Z" level=error msg="Failed to destroy network for sandbox \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.533675 containerd[1516]: time="2025-01-29T16:26:59.533643455Z" level=error msg="encountered an error cleaning up failed sandbox \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.533846 containerd[1516]: time="2025-01-29T16:26:59.533825407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pjjr9,Uid:91170ca1-19cd-4c25-a591-9a7f6b7062b6,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.534155 kubelet[2658]: 
E0129 16:26:59.534116 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.534398 kubelet[2658]: E0129 16:26:59.534376 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pjjr9" Jan 29 16:26:59.534480 kubelet[2658]: E0129 16:26:59.534460 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pjjr9" Jan 29 16:26:59.534630 kubelet[2658]: E0129 16:26:59.534599 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pjjr9_kube-system(91170ca1-19cd-4c25-a591-9a7f6b7062b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pjjr9_kube-system(91170ca1-19cd-4c25-a591-9a7f6b7062b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pjjr9" podUID="91170ca1-19cd-4c25-a591-9a7f6b7062b6" Jan 29 16:26:59.556348 containerd[1516]: time="2025-01-29T16:26:59.556289850Z" level=error msg="Failed to destroy network for sandbox \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.556719 containerd[1516]: time="2025-01-29T16:26:59.556690612Z" level=error msg="encountered an error cleaning up failed sandbox \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.557753 containerd[1516]: time="2025-01-29T16:26:59.556753310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b88d6857-fnfkx,Uid:a373ce1d-d072-4edb-a73d-44d8bb96f265,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 29 16:26:59.557753 containerd[1516]: time="2025-01-29T16:26:59.556850763Z" level=error msg="Failed to destroy network for sandbox \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.557753 containerd[1516]: time="2025-01-29T16:26:59.557157729Z" level=error msg="encountered an error cleaning up failed sandbox \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.557753 containerd[1516]: time="2025-01-29T16:26:59.557187615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-chfgm,Uid:e6a6a4b6-0cc0-4539-9ece-d802ad97d93f,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.557957 kubelet[2658]: E0129 16:26:59.557392 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.557957 kubelet[2658]: E0129 16:26:59.557446 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" Jan 29 16:26:59.557957 kubelet[2658]: E0129 16:26:59.557468 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" Jan 29 16:26:59.558058 containerd[1516]: time="2025-01-29T16:26:59.557845882Z" level=error msg="Failed to destroy network for sandbox \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.558081 kubelet[2658]: E0129 16:26:59.557517 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84ffc4856f-chfgm_calico-apiserver(e6a6a4b6-0cc0-4539-9ece-d802ad97d93f)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-84ffc4856f-chfgm_calico-apiserver(e6a6a4b6-0cc0-4539-9ece-d802ad97d93f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" podUID="e6a6a4b6-0cc0-4539-9ece-d802ad97d93f" Jan 29 16:26:59.558081 kubelet[2658]: E0129 16:26:59.557566 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.558081 kubelet[2658]: E0129 16:26:59.557583 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" Jan 29 16:26:59.558179 kubelet[2658]: E0129 16:26:59.557596 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" Jan 29 16:26:59.558179 kubelet[2658]: E0129 16:26:59.557615 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55b88d6857-fnfkx_calico-system(a373ce1d-d072-4edb-a73d-44d8bb96f265)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55b88d6857-fnfkx_calico-system(a373ce1d-d072-4edb-a73d-44d8bb96f265)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" podUID="a373ce1d-d072-4edb-a73d-44d8bb96f265" Jan 29 16:26:59.558247 containerd[1516]: time="2025-01-29T16:26:59.558143390Z" level=error msg="encountered an error cleaning up failed sandbox \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.558247 containerd[1516]: time="2025-01-29T16:26:59.558180349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ssm5,Uid:137b728f-72a7-4e26-ad10-b54fc9528d91,Namespace:kube-system,Attempt:4,} failed, 
error" error="failed to setup network for sandbox \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.558366 kubelet[2658]: E0129 16:26:59.558328 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.558403 kubelet[2658]: E0129 16:26:59.558389 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6ssm5" Jan 29 16:26:59.558428 kubelet[2658]: E0129 16:26:59.558402 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6ssm5" Jan 29 16:26:59.558456 kubelet[2658]: E0129 16:26:59.558441 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6ssm5_kube-system(137b728f-72a7-4e26-ad10-b54fc9528d91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6ssm5_kube-system(137b728f-72a7-4e26-ad10-b54fc9528d91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6ssm5" podUID="137b728f-72a7-4e26-ad10-b54fc9528d91" Jan 29 16:26:59.563276 containerd[1516]: time="2025-01-29T16:26:59.563251850Z" level=error msg="Failed to destroy network for sandbox \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.563616 containerd[1516]: time="2025-01-29T16:26:59.563593812Z" level=error msg="encountered an error cleaning up failed sandbox \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.563666 containerd[1516]: time="2025-01-29T16:26:59.563630411Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-fgvfx,Uid:6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.563863 kubelet[2658]: E0129 16:26:59.563822 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:59.563910 kubelet[2658]: E0129 16:26:59.563892 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:59.563933 kubelet[2658]: E0129 16:26:59.563917 2658 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgvfx" Jan 29 16:26:59.563990 kubelet[2658]: E0129 16:26:59.563964 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fgvfx_calico-system(6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fgvfx_calico-system(6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fgvfx" podUID="6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0" Jan 29 16:26:59.574018 containerd[1516]: time="2025-01-29T16:26:59.573988066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:59.574929 containerd[1516]: time="2025-01-29T16:26:59.574849372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.161239093s" Jan 29 16:26:59.574929 containerd[1516]: time="2025-01-29T16:26:59.574900378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference 
\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 16:26:59.583911 containerd[1516]: time="2025-01-29T16:26:59.583865048Z" level=info msg="CreateContainer within sandbox \"cb4824c2e1b0f0d2ace6f510b3fe31e7fa6f9817b76ae8742e7f16c1a58f0f85\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 16:26:59.604960 containerd[1516]: time="2025-01-29T16:26:59.604914996Z" level=info msg="CreateContainer within sandbox \"cb4824c2e1b0f0d2ace6f510b3fe31e7fa6f9817b76ae8742e7f16c1a58f0f85\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a3c4c3e8702db5684e4fda99752d8ad6ba5356183c6af62682c4beccf096c353\"" Jan 29 16:26:59.605784 containerd[1516]: time="2025-01-29T16:26:59.605460360Z" level=info msg="StartContainer for \"a3c4c3e8702db5684e4fda99752d8ad6ba5356183c6af62682c4beccf096c353\"" Jan 29 16:26:59.689091 systemd[1]: Started cri-containerd-a3c4c3e8702db5684e4fda99752d8ad6ba5356183c6af62682c4beccf096c353.scope - libcontainer container a3c4c3e8702db5684e4fda99752d8ad6ba5356183c6af62682c4beccf096c353. Jan 29 16:26:59.725112 containerd[1516]: time="2025-01-29T16:26:59.724989254Z" level=info msg="StartContainer for \"a3c4c3e8702db5684e4fda99752d8ad6ba5356183c6af62682c4beccf096c353\" returns successfully" Jan 29 16:26:59.791987 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 16:26:59.792775 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 16:26:59.846746 kubelet[2658]: I0129 16:26:59.846683 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b" Jan 29 16:26:59.847718 containerd[1516]: time="2025-01-29T16:26:59.847494799Z" level=info msg="StopPodSandbox for \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\"" Jan 29 16:26:59.848286 containerd[1516]: time="2025-01-29T16:26:59.847765867Z" level=info msg="Ensure that sandbox b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b in task-service has been cleanup successfully" Jan 29 16:26:59.848286 containerd[1516]: time="2025-01-29T16:26:59.847950373Z" level=info msg="TearDown network for sandbox \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\" successfully" Jan 29 16:26:59.848286 containerd[1516]: time="2025-01-29T16:26:59.847961614Z" level=info msg="StopPodSandbox for \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\" returns successfully" Jan 29 16:26:59.848286 containerd[1516]: time="2025-01-29T16:26:59.848212205Z" level=info msg="StopPodSandbox for \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\"" Jan 29 16:26:59.848379 containerd[1516]: time="2025-01-29T16:26:59.848297054Z" level=info msg="TearDown network for sandbox \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\" successfully" Jan 29 16:26:59.848379 containerd[1516]: time="2025-01-29T16:26:59.848306342Z" level=info msg="StopPodSandbox for \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\" returns successfully" Jan 29 16:26:59.848687 containerd[1516]: time="2025-01-29T16:26:59.848666358Z" level=info msg="StopPodSandbox for \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\"" Jan 29 16:26:59.848768 containerd[1516]: time="2025-01-29T16:26:59.848740377Z" level=info msg="TearDown network for sandbox \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\" successfully" Jan 29 16:26:59.848768 
containerd[1516]: time="2025-01-29T16:26:59.848753511Z" level=info msg="StopPodSandbox for \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\" returns successfully" Jan 29 16:26:59.849031 containerd[1516]: time="2025-01-29T16:26:59.849015393Z" level=info msg="StopPodSandbox for \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\"" Jan 29 16:26:59.849103 containerd[1516]: time="2025-01-29T16:26:59.849086366Z" level=info msg="TearDown network for sandbox \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\" successfully" Jan 29 16:26:59.849103 containerd[1516]: time="2025-01-29T16:26:59.849099561Z" level=info msg="StopPodSandbox for \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\" returns successfully" Jan 29 16:26:59.849282 containerd[1516]: time="2025-01-29T16:26:59.849266474Z" level=info msg="StopPodSandbox for \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\"" Jan 29 16:26:59.849349 containerd[1516]: time="2025-01-29T16:26:59.849337097Z" level=info msg="TearDown network for sandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\" successfully" Jan 29 16:26:59.849378 containerd[1516]: time="2025-01-29T16:26:59.849347516Z" level=info msg="StopPodSandbox for \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\" returns successfully" Jan 29 16:26:59.849843 containerd[1516]: time="2025-01-29T16:26:59.849825423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pjjr9,Uid:91170ca1-19cd-4c25-a591-9a7f6b7062b6,Namespace:kube-system,Attempt:5,}" Jan 29 16:26:59.855621 kubelet[2658]: I0129 16:26:59.855588 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924" Jan 29 16:26:59.856757 containerd[1516]: time="2025-01-29T16:26:59.856480477Z" level=info msg="StopPodSandbox for \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\"" Jan 29 16:26:59.857241 containerd[1516]: time="2025-01-29T16:26:59.857077218Z" level=info msg="Ensure that sandbox 90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924 in task-service has been cleanup successfully" Jan 29 16:26:59.857409 containerd[1516]: time="2025-01-29T16:26:59.857343958Z" level=info msg="TearDown network for sandbox \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\" successfully" Jan 29 16:26:59.857409 containerd[1516]: time="2025-01-29T16:26:59.857362523Z" level=info msg="StopPodSandbox for \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\" returns successfully" Jan 29 16:26:59.857720 containerd[1516]: time="2025-01-29T16:26:59.857696881Z" level=info msg="StopPodSandbox for \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\"" Jan 29 16:26:59.858032 containerd[1516]: time="2025-01-29T16:26:59.857931251Z" level=info msg="TearDown network for sandbox \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\" successfully" Jan 29 16:26:59.858032 containerd[1516]: time="2025-01-29T16:26:59.857951539Z" level=info msg="StopPodSandbox for \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\" returns successfully" Jan 29 16:26:59.858262 containerd[1516]: time="2025-01-29T16:26:59.858231535Z" level=info msg="StopPodSandbox for \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\"" Jan 29 16:26:59.858351 containerd[1516]: time="2025-01-29T16:26:59.858309220Z" level=info msg="TearDown network for sandbox 
\"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\" successfully" Jan 29 16:26:59.858351 containerd[1516]: time="2025-01-29T16:26:59.858322285Z" level=info msg="StopPodSandbox for \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\" returns successfully" Jan 29 16:26:59.858852 kubelet[2658]: I0129 16:26:59.858550 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8" Jan 29 16:26:59.859131 containerd[1516]: time="2025-01-29T16:26:59.859103883Z" level=info msg="StopPodSandbox for \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\"" Jan 29 16:26:59.859226 containerd[1516]: time="2025-01-29T16:26:59.859204632Z" level=info msg="TearDown network for sandbox \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\" successfully" Jan 29 16:26:59.859226 containerd[1516]: time="2025-01-29T16:26:59.859222004Z" level=info msg="StopPodSandbox for \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\" returns successfully" Jan 29 16:26:59.859278 containerd[1516]: time="2025-01-29T16:26:59.859100707Z" level=info msg="StopPodSandbox for \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\"" Jan 29 16:26:59.859783 containerd[1516]: time="2025-01-29T16:26:59.859432790Z" level=info msg="Ensure that sandbox febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8 in task-service has been cleanup successfully" Jan 29 16:26:59.859880 containerd[1516]: time="2025-01-29T16:26:59.859816540Z" level=info msg="TearDown network for sandbox \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\" successfully" Jan 29 16:26:59.859911 containerd[1516]: time="2025-01-29T16:26:59.859878396Z" level=info msg="StopPodSandbox for \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\" returns successfully" Jan 29 16:26:59.860240 containerd[1516]: time="2025-01-29T16:26:59.860219988Z" level=info msg="StopPodSandbox for \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\"" Jan 29 16:26:59.860318 containerd[1516]: time="2025-01-29T16:26:59.860299637Z" level=info msg="TearDown network for sandbox \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\" successfully" Jan 29 16:26:59.860318 containerd[1516]: time="2025-01-29T16:26:59.860312521Z" level=info msg="StopPodSandbox for \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\" returns successfully" Jan 29 16:26:59.860401 containerd[1516]: time="2025-01-29T16:26:59.860316789Z" level=info msg="StopPodSandbox for \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\"" Jan 29 16:26:59.860430 containerd[1516]: time="2025-01-29T16:26:59.860419642Z" level=info msg="TearDown network for sandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\" successfully" Jan 29 16:26:59.860462 containerd[1516]: time="2025-01-29T16:26:59.860429180Z" level=info msg="StopPodSandbox for \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\" returns successfully" Jan 29 16:26:59.860972 containerd[1516]: time="2025-01-29T16:26:59.860818321Z" level=info msg="StopPodSandbox for \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\"" Jan 29 16:26:59.860972 containerd[1516]: time="2025-01-29T16:26:59.860904162Z" level=info msg="TearDown network for sandbox \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\" successfully" Jan 29 16:26:59.860972 containerd[1516]: 
time="2025-01-29T16:26:59.860915543Z" level=info msg="StopPodSandbox for \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\" returns successfully" Jan 29 16:26:59.861106 containerd[1516]: time="2025-01-29T16:26:59.860986887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ssm5,Uid:137b728f-72a7-4e26-ad10-b54fc9528d91,Namespace:kube-system,Attempt:5,}" Jan 29 16:26:59.862487 containerd[1516]: time="2025-01-29T16:26:59.862446067Z" level=info msg="StopPodSandbox for \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\"" Jan 29 16:26:59.862589 containerd[1516]: time="2025-01-29T16:26:59.862525025Z" level=info msg="TearDown network for sandbox \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\" successfully" Jan 29 16:26:59.862589 containerd[1516]: time="2025-01-29T16:26:59.862534062Z" level=info msg="StopPodSandbox for \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\" returns successfully" Jan 29 16:26:59.863637 containerd[1516]: time="2025-01-29T16:26:59.863534400Z" level=info msg="StopPodSandbox for \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\"" Jan 29 16:26:59.863637 containerd[1516]: time="2025-01-29T16:26:59.863621152Z" level=info msg="TearDown network for sandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\" successfully" Jan 29 16:26:59.863637 containerd[1516]: time="2025-01-29T16:26:59.863630550Z" level=info msg="StopPodSandbox for \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\" returns successfully" Jan 29 16:26:59.865328 containerd[1516]: time="2025-01-29T16:26:59.865280528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b88d6857-fnfkx,Uid:a373ce1d-d072-4edb-a73d-44d8bb96f265,Namespace:calico-system,Attempt:5,}" Jan 29 16:26:59.877609 kubelet[2658]: I0129 16:26:59.875886 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b" Jan 29 16:26:59.877828 containerd[1516]: time="2025-01-29T16:26:59.876709383Z" level=info msg="StopPodSandbox for \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\"" Jan 29 16:26:59.878291 kubelet[2658]: I0129 16:26:59.878269 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56" Jan 29 16:26:59.878710 containerd[1516]: time="2025-01-29T16:26:59.878683519Z" level=info msg="StopPodSandbox for \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\"" Jan 29 16:26:59.881552 kubelet[2658]: I0129 16:26:59.881518 2658 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683" Jan 29 16:26:59.882472 containerd[1516]: time="2025-01-29T16:26:59.882445523Z" level=info msg="StopPodSandbox for \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\"" Jan 29 16:26:59.886913 containerd[1516]: time="2025-01-29T16:26:59.886853487Z" level=info msg="Ensure that sandbox a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b in task-service has been cleanup successfully" Jan 29 16:26:59.887194 containerd[1516]: time="2025-01-29T16:26:59.887174901Z" level=info msg="Ensure that sandbox c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56 in task-service has been cleanup successfully" Jan 29 16:26:59.887565 containerd[1516]: 
time="2025-01-29T16:26:59.887536199Z" level=info msg="Ensure that sandbox 5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683 in task-service has been cleanup successfully" Jan 29 16:26:59.887877 containerd[1516]: time="2025-01-29T16:26:59.887769206Z" level=info msg="TearDown network for sandbox \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\" successfully" Jan 29 16:26:59.887877 containerd[1516]: time="2025-01-29T16:26:59.887784745Z" level=info msg="StopPodSandbox for \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\" returns successfully" Jan 29 16:26:59.888713 containerd[1516]: time="2025-01-29T16:26:59.888497624Z" level=info msg="TearDown network for sandbox \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\" successfully" Jan 29 16:26:59.888713 containerd[1516]: time="2025-01-29T16:26:59.888513414Z" level=info msg="StopPodSandbox for \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\" returns successfully" Jan 29 16:26:59.888828 containerd[1516]: time="2025-01-29T16:26:59.888812705Z" level=info msg="TearDown network for sandbox \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\" successfully" Jan 29 16:26:59.888893 containerd[1516]: time="2025-01-29T16:26:59.888879551Z" level=info msg="StopPodSandbox for \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\" returns successfully" Jan 29 16:26:59.898052 containerd[1516]: time="2025-01-29T16:26:59.897721090Z" level=info msg="StopPodSandbox for \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\"" Jan 29 16:26:59.898052 containerd[1516]: time="2025-01-29T16:26:59.897862405Z" level=info msg="TearDown network for sandbox \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\" successfully" Jan 29 16:26:59.898052 containerd[1516]: time="2025-01-29T16:26:59.897876581Z" level=info msg="StopPodSandbox for \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\" returns successfully" Jan 29 16:26:59.898052 containerd[1516]: time="2025-01-29T16:26:59.897940702Z" level=info msg="StopPodSandbox for \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\"" Jan 29 16:26:59.898052 containerd[1516]: time="2025-01-29T16:26:59.898009470Z" level=info msg="TearDown network for sandbox \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\" successfully" Jan 29 16:26:59.898052 containerd[1516]: time="2025-01-29T16:26:59.898020041Z" level=info msg="StopPodSandbox for \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\" returns successfully" Jan 29 16:26:59.898882 containerd[1516]: time="2025-01-29T16:26:59.898864566Z" level=info msg="StopPodSandbox for \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\"" Jan 29 16:26:59.899153 containerd[1516]: time="2025-01-29T16:26:59.898989170Z" level=info msg="TearDown network for sandbox \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\" successfully" Jan 29 16:26:59.899216 containerd[1516]: time="2025-01-29T16:26:59.899202250Z" level=info msg="StopPodSandbox for \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\" returns successfully" Jan 29 16:26:59.904081 kubelet[2658]: I0129 16:26:59.902888 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tzfj6" podStartSLOduration=2.130278761 podStartE2EDuration="21.902859586s" podCreationTimestamp="2025-01-29 16:26:38 +0000 UTC" firstStartedPulling="2025-01-29 16:26:39.804237684 
+0000 UTC m=+17.555755826" lastFinishedPulling="2025-01-29 16:26:59.576818519 +0000 UTC m=+37.328336651" observedRunningTime="2025-01-29 16:26:59.902288073 +0000 UTC m=+37.653806205" watchObservedRunningTime="2025-01-29 16:26:59.902859586 +0000 UTC m=+37.654377718" Jan 29 16:26:59.905053 containerd[1516]: time="2025-01-29T16:26:59.904476311Z" level=info msg="StopPodSandbox for \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\"" Jan 29 16:26:59.905053 containerd[1516]: time="2025-01-29T16:26:59.904658103Z" level=info msg="TearDown network for sandbox \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\" successfully" Jan 29 16:26:59.905053 containerd[1516]: time="2025-01-29T16:26:59.904672550Z" level=info msg="StopPodSandbox for \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\" returns successfully" Jan 29 16:26:59.905053 containerd[1516]: time="2025-01-29T16:26:59.904730439Z" level=info msg="StopPodSandbox for \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\"" Jan 29 16:26:59.905053 containerd[1516]: time="2025-01-29T16:26:59.904834123Z" level=info msg="TearDown network for sandbox \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\" successfully" Jan 29 16:26:59.905053 containerd[1516]: time="2025-01-29T16:26:59.904846226Z" level=info msg="StopPodSandbox for \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\" returns successfully" Jan 29 16:26:59.905053 containerd[1516]: time="2025-01-29T16:26:59.904907761Z" level=info msg="StopPodSandbox for \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\"" Jan 29 16:26:59.905053 containerd[1516]: time="2025-01-29T16:26:59.904998301Z" level=info msg="TearDown network for sandbox \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\" successfully" Jan 29 16:26:59.905053 containerd[1516]: time="2025-01-29T16:26:59.905010324Z" level=info msg="StopPodSandbox for \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\" returns successfully" Jan 29 16:26:59.905782 containerd[1516]: time="2025-01-29T16:26:59.905760162Z" level=info msg="StopPodSandbox for \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\"" Jan 29 16:26:59.906049 containerd[1516]: time="2025-01-29T16:26:59.906028265Z" level=info msg="TearDown network for sandbox \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\" successfully" Jan 29 16:26:59.906112 containerd[1516]: time="2025-01-29T16:26:59.906098807Z" level=info msg="StopPodSandbox for \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\" returns successfully" Jan 29 16:26:59.906201 containerd[1516]: time="2025-01-29T16:26:59.906187614Z" level=info msg="StopPodSandbox for \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\"" Jan 29 16:26:59.906353 containerd[1516]: time="2025-01-29T16:26:59.906335482Z" level=info msg="TearDown network for sandbox \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\" successfully" Jan 29 16:26:59.906434 containerd[1516]: time="2025-01-29T16:26:59.906415552Z" level=info msg="StopPodSandbox for \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\" returns successfully" Jan 29 16:26:59.906908 containerd[1516]: time="2025-01-29T16:26:59.906871678Z" level=info msg="StopPodSandbox for \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\"" Jan 29 16:26:59.906990 containerd[1516]: time="2025-01-29T16:26:59.906973108Z" level=info msg="StopPodSandbox for 
\"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\"" Jan 29 16:26:59.907112 containerd[1516]: time="2025-01-29T16:26:59.907096630Z" level=info msg="TearDown network for sandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\" successfully" Jan 29 16:26:59.907175 containerd[1516]: time="2025-01-29T16:26:59.907162945Z" level=info msg="StopPodSandbox for \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\" returns successfully" Jan 29 16:26:59.907248 containerd[1516]: time="2025-01-29T16:26:59.907019355Z" level=info msg="TearDown network for sandbox \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\" successfully" Jan 29 16:26:59.907569 containerd[1516]: time="2025-01-29T16:26:59.907551384Z" level=info msg="StopPodSandbox for \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\" returns successfully" Jan 29 16:26:59.907678 containerd[1516]: time="2025-01-29T16:26:59.906886276Z" level=info msg="StopPodSandbox for \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\"" Jan 29 16:26:59.907901 containerd[1516]: time="2025-01-29T16:26:59.907854043Z" level=info msg="TearDown network for sandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\" successfully" Jan 29 16:26:59.908021 containerd[1516]: time="2025-01-29T16:26:59.907970220Z" level=info msg="StopPodSandbox for \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\" returns successfully" Jan 29 16:26:59.908498 containerd[1516]: time="2025-01-29T16:26:59.908479917Z" level=info msg="StopPodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\"" Jan 29 16:26:59.909070 containerd[1516]: time="2025-01-29T16:26:59.909036061Z" level=info msg="TearDown network for sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" successfully" Jan 29 16:26:59.909463 containerd[1516]: time="2025-01-29T16:26:59.909157539Z" level=info msg="StopPodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" returns successfully" Jan 29 16:26:59.909463 containerd[1516]: time="2025-01-29T16:26:59.908532616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-b8jf8,Uid:ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e,Namespace:calico-apiserver,Attempt:5,}" Jan 29 16:26:59.909463 containerd[1516]: time="2025-01-29T16:26:59.908698398Z" level=info msg="StopPodSandbox for \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\"" Jan 29 16:26:59.909463 containerd[1516]: time="2025-01-29T16:26:59.909450740Z" level=info msg="TearDown network for sandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\" successfully" Jan 29 16:26:59.909463 containerd[1516]: time="2025-01-29T16:26:59.909461991Z" level=info msg="StopPodSandbox for \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\" returns successfully" Jan 29 16:26:59.910132 containerd[1516]: time="2025-01-29T16:26:59.910110759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-chfgm,Uid:e6a6a4b6-0cc0-4539-9ece-d802ad97d93f,Namespace:calico-apiserver,Attempt:6,}" Jan 29 16:26:59.910358 containerd[1516]: time="2025-01-29T16:26:59.910340180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvfx,Uid:6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0,Namespace:calico-system,Attempt:5,}" Jan 29 16:27:00.166061 systemd-networkd[1428]: cali9589f3d6d5e: Link UP Jan 29 16:27:00.166249 systemd-networkd[1428]: 
cali9589f3d6d5e: Gained carrier Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:26:59.992 [INFO][4658] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.030 [INFO][4658] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--6ssm5-eth0 coredns-668d6bf9bc- kube-system 137b728f-72a7-4e26-ad10-b54fc9528d91 711 0 2025-01-29 16:26:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-6ssm5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9589f3d6d5e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ssm5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6ssm5-" Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.031 [INFO][4658] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ssm5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6ssm5-eth0" Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.109 [INFO][4730] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" HandleID="k8s-pod-network.d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" Workload="localhost-k8s-coredns--668d6bf9bc--6ssm5-eth0" Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.123 [INFO][4730] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" HandleID="k8s-pod-network.d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" Workload="localhost-k8s-coredns--668d6bf9bc--6ssm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305c50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-6ssm5", "timestamp":"2025-01-29 16:27:00.109239136 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.123 [INFO][4730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.123 [INFO][4730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.123 [INFO][4730] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.128 [INFO][4730] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" host="localhost" Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.134 [INFO][4730] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.139 [INFO][4730] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.141 [INFO][4730] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.143 [INFO][4730] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.143 [INFO][4730] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" host="localhost" Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.144 [INFO][4730] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.149 [INFO][4730] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" host="localhost" Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.154 [INFO][4730] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" host="localhost" Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.155 [INFO][4730] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" host="localhost" Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.155 [INFO][4730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
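With calico-node up, the CNI plugin's IPAM step traced above acquires the host-wide lock, confirms this host's affinity for the block 192.168.88.128/26, and claims the first free address, 192.168.88.129, for the coredns pod. A small Go sketch, added only to illustrate the block arithmetic with the standard net/netip package (not taken from the Calico code):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// The affinity block claimed for host "localhost" in the IPAM trace above.
    	block := netip.MustParsePrefix("192.168.88.128/26")

    	// A /26 spans 64 addresses, 192.168.88.128 through 192.168.88.191.
    	// The first address after the block base, .129, is what IPAM assigned to
    	// coredns-668d6bf9bc-6ssm5; the endpoint then carries it as a /32.
    	addr := block.Addr().Next()
    	fmt.Println("first assignable address:", addr, "in block:", block.Contains(addr))
    }

Because each host allocates out of its own affine block, the allocation only needs the per-host lock seen in the trace rather than cluster-wide coordination on every address.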
Jan 29 16:27:00.178611 containerd[1516]: 2025-01-29 16:27:00.155 [INFO][4730] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" HandleID="k8s-pod-network.d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" Workload="localhost-k8s-coredns--668d6bf9bc--6ssm5-eth0" Jan 29 16:27:00.179319 containerd[1516]: 2025-01-29 16:27:00.158 [INFO][4658] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ssm5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6ssm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6ssm5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"137b728f-72a7-4e26-ad10-b54fc9528d91", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-6ssm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9589f3d6d5e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:27:00.179319 containerd[1516]: 2025-01-29 16:27:00.158 [INFO][4658] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ssm5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6ssm5-eth0" Jan 29 16:27:00.179319 containerd[1516]: 2025-01-29 16:27:00.158 [INFO][4658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9589f3d6d5e ContainerID="d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ssm5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6ssm5-eth0" Jan 29 16:27:00.179319 containerd[1516]: 2025-01-29 16:27:00.166 [INFO][4658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ssm5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6ssm5-eth0" Jan 29 16:27:00.179319 containerd[1516]: 2025-01-29 16:27:00.166 
[INFO][4658] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ssm5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6ssm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6ssm5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"137b728f-72a7-4e26-ad10-b54fc9528d91", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a", Pod:"coredns-668d6bf9bc-6ssm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9589f3d6d5e", MAC:"2e:38:91:7e:11:21", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:27:00.179319 containerd[1516]: 2025-01-29 16:27:00.176 [INFO][4658] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ssm5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6ssm5-eth0" Jan 29 16:27:00.213608 systemd[1]: run-netns-cni\x2df5c9a620\x2d26fd\x2d9a4e\x2d1388\x2dc335a20005ce.mount: Deactivated successfully. Jan 29 16:27:00.213718 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8-shm.mount: Deactivated successfully. Jan 29 16:27:00.213812 systemd[1]: run-netns-cni\x2d6d51bb63\x2d885b\x2d821a\x2d5f52\x2d185d00b43cb5.mount: Deactivated successfully. Jan 29 16:27:00.213889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56-shm.mount: Deactivated successfully. Jan 29 16:27:00.235326 containerd[1516]: time="2025-01-29T16:27:00.235207415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:27:00.235326 containerd[1516]: time="2025-01-29T16:27:00.235282746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:27:00.235856 containerd[1516]: time="2025-01-29T16:27:00.235602896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:27:00.236025 containerd[1516]: time="2025-01-29T16:27:00.235881169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:27:00.267323 systemd[1]: Started cri-containerd-d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a.scope - libcontainer container d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a. Jan 29 16:27:00.289818 systemd-networkd[1428]: cali9b9ecba1cd7: Link UP Jan 29 16:27:00.294216 systemd-networkd[1428]: cali9b9ecba1cd7: Gained carrier Jan 29 16:27:00.312708 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:26:59.904 [INFO][4621] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:26:59.924 [INFO][4621] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--pjjr9-eth0 coredns-668d6bf9bc- kube-system 91170ca1-19cd-4c25-a591-9a7f6b7062b6 716 0 2025-01-29 16:26:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-pjjr9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9b9ecba1cd7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pjjr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pjjr9-" Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:26:59.925 [INFO][4621] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pjjr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pjjr9-eth0" Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.105 [INFO][4652] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" HandleID="k8s-pod-network.de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" Workload="localhost-k8s-coredns--668d6bf9bc--pjjr9-eth0" Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.124 [INFO][4652] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" HandleID="k8s-pod-network.de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" Workload="localhost-k8s-coredns--668d6bf9bc--pjjr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f9630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-pjjr9", "timestamp":"2025-01-29 16:27:00.105125112 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 
16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.124 [INFO][4652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.156 [INFO][4652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.156 [INFO][4652] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.226 [INFO][4652] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" host="localhost" Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.229 [INFO][4652] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.240 [INFO][4652] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.242 [INFO][4652] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.247 [INFO][4652] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.247 [INFO][4652] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" host="localhost" Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.251 [INFO][4652] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1 Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.256 [INFO][4652] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" host="localhost" Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.263 [INFO][4652] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" host="localhost" Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.263 [INFO][4652] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" host="localhost" Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.264 [INFO][4652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
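The IPAM trace above is Calico's per-host assignment flow: the plugin takes the host-wide IPAM lock, confirms this node's affinity for the 192.168.88.128/26 block, claims the next free address (192.168.88.130 here, after 192.168.88.129 went to coredns-668d6bf9bc-6ssm5), and releases the lock. As a quick sanity check of the block arithmetic only — this is plain Go, not Calico code — a /26 holds 64 addresses, 192.168.88.128 through 192.168.88.191:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Affinity block that every CNI ADD in this log resolves to.
        block := netip.MustParsePrefix("192.168.88.128/26")

        // Enumerate the block; a /26 spans 2^(32-26) = 64 addresses.
        n := 0
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            n++
        }
        fmt.Println("addresses in block:", n) // 64

        // The pod IPs assigned above fall inside it.
        for _, ip := range []string{"192.168.88.129", "192.168.88.130"} {
            fmt.Println(ip, "in block:", block.Contains(netip.MustParseAddr(ip)))
        }
    }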
Jan 29 16:27:00.318633 containerd[1516]: 2025-01-29 16:27:00.264 [INFO][4652] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" HandleID="k8s-pod-network.de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" Workload="localhost-k8s-coredns--668d6bf9bc--pjjr9-eth0" Jan 29 16:27:00.319640 containerd[1516]: 2025-01-29 16:27:00.274 [INFO][4621] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pjjr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pjjr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--pjjr9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"91170ca1-19cd-4c25-a591-9a7f6b7062b6", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-pjjr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b9ecba1cd7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:27:00.319640 containerd[1516]: 2025-01-29 16:27:00.274 [INFO][4621] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pjjr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pjjr9-eth0" Jan 29 16:27:00.319640 containerd[1516]: 2025-01-29 16:27:00.274 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b9ecba1cd7 ContainerID="de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pjjr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pjjr9-eth0" Jan 29 16:27:00.319640 containerd[1516]: 2025-01-29 16:27:00.295 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pjjr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pjjr9-eth0" Jan 29 16:27:00.319640 containerd[1516]: 2025-01-29 16:27:00.301 
[INFO][4621] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pjjr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pjjr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--pjjr9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"91170ca1-19cd-4c25-a591-9a7f6b7062b6", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1", Pod:"coredns-668d6bf9bc-pjjr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b9ecba1cd7", MAC:"0a:a5:19:dd:8e:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:27:00.319640 containerd[1516]: 2025-01-29 16:27:00.314 [INFO][4621] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pjjr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pjjr9-eth0" Jan 29 16:27:00.350208 containerd[1516]: time="2025-01-29T16:27:00.348293320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:27:00.350208 containerd[1516]: time="2025-01-29T16:27:00.348370805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:27:00.350208 containerd[1516]: time="2025-01-29T16:27:00.348483427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:27:00.350208 containerd[1516]: time="2025-01-29T16:27:00.348814458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:27:00.350447 containerd[1516]: time="2025-01-29T16:27:00.350364097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ssm5,Uid:137b728f-72a7-4e26-ad10-b54fc9528d91,Namespace:kube-system,Attempt:5,} returns sandbox id \"d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a\"" Jan 29 16:27:00.370541 systemd-networkd[1428]: calia12ff5a43a6: Link UP Jan 29 16:27:00.372735 systemd-networkd[1428]: calia12ff5a43a6: Gained carrier Jan 29 16:27:00.376166 systemd[1]: Started cri-containerd-de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1.scope - libcontainer container de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1. Jan 29 16:27:00.385549 containerd[1516]: time="2025-01-29T16:27:00.384525866Z" level=info msg="CreateContainer within sandbox \"d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.077 [INFO][4709] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.098 [INFO][4709] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84ffc4856f--chfgm-eth0 calico-apiserver-84ffc4856f- calico-apiserver e6a6a4b6-0cc0-4539-9ece-d802ad97d93f 714 0 2025-01-29 16:26:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84ffc4856f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84ffc4856f-chfgm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia12ff5a43a6 [] []}} ContainerID="76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-chfgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--chfgm-" Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.098 [INFO][4709] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-chfgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--chfgm-eth0" Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.160 [INFO][4764] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" HandleID="k8s-pod-network.76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" Workload="localhost-k8s-calico--apiserver--84ffc4856f--chfgm-eth0" Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.220 [INFO][4764] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" HandleID="k8s-pod-network.76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" Workload="localhost-k8s-calico--apiserver--84ffc4856f--chfgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002535c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84ffc4856f-chfgm", "timestamp":"2025-01-29 16:27:00.160326768 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.220 [INFO][4764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.263 [INFO][4764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.264 [INFO][4764] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.327 [INFO][4764] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" host="localhost" Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.335 [INFO][4764] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.341 [INFO][4764] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.343 [INFO][4764] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.345 [INFO][4764] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.345 [INFO][4764] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" host="localhost" Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.347 [INFO][4764] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.352 [INFO][4764] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" host="localhost" Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.359 [INFO][4764] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" host="localhost" Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.359 [INFO][4764] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" host="localhost" Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.359 [INFO][4764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
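Note that the WorkloadEndpoint dumps print container ports as Go hex literals; decoded, they are the standard CoreDNS ports (0x35 = 53 for dns and dns-tcp, 0x23c1 = 9153 for metrics). A one-off decode, for reference only:

    package main

    import "fmt"

    func main() {
        // Port values as they appear in the endpoint dumps above.
        ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
        for name, p := range ports {
            fmt.Printf("%-8s %d\n", name, p) // dns/dns-tcp -> 53, metrics -> 9153
        }
    }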
Jan 29 16:27:00.389206 containerd[1516]: 2025-01-29 16:27:00.359 [INFO][4764] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" HandleID="k8s-pod-network.76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" Workload="localhost-k8s-calico--apiserver--84ffc4856f--chfgm-eth0" Jan 29 16:27:00.389906 containerd[1516]: 2025-01-29 16:27:00.363 [INFO][4709] cni-plugin/k8s.go 386: Populated endpoint ContainerID="76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-chfgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--chfgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84ffc4856f--chfgm-eth0", GenerateName:"calico-apiserver-84ffc4856f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6a6a4b6-0cc0-4539-9ece-d802ad97d93f", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84ffc4856f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84ffc4856f-chfgm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia12ff5a43a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:27:00.389906 containerd[1516]: 2025-01-29 16:27:00.364 [INFO][4709] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-chfgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--chfgm-eth0" Jan 29 16:27:00.389906 containerd[1516]: 2025-01-29 16:27:00.364 [INFO][4709] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia12ff5a43a6 ContainerID="76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-chfgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--chfgm-eth0" Jan 29 16:27:00.389906 containerd[1516]: 2025-01-29 16:27:00.370 [INFO][4709] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-chfgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--chfgm-eth0" Jan 29 16:27:00.389906 containerd[1516]: 2025-01-29 16:27:00.370 [INFO][4709] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-chfgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--chfgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84ffc4856f--chfgm-eth0", GenerateName:"calico-apiserver-84ffc4856f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6a6a4b6-0cc0-4539-9ece-d802ad97d93f", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84ffc4856f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d", Pod:"calico-apiserver-84ffc4856f-chfgm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia12ff5a43a6", MAC:"76:50:fc:87:18:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:27:00.389906 containerd[1516]: 2025-01-29 16:27:00.385 [INFO][4709] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-chfgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--chfgm-eth0" Jan 29 16:27:00.399018 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:27:00.414601 containerd[1516]: time="2025-01-29T16:27:00.414558073Z" level=info msg="CreateContainer within sandbox \"d91ef75357d70023a5914da8fe35eb684160671ff9ffb341fd75e2835e1c3e5a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f5b82e9d12ea273b3e738a69596e090b6ed3323ba71a67f453fce391bf6e959\"" Jan 29 16:27:00.415830 containerd[1516]: time="2025-01-29T16:27:00.415575162Z" level=info msg="StartContainer for \"5f5b82e9d12ea273b3e738a69596e090b6ed3323ba71a67f453fce391bf6e959\"" Jan 29 16:27:00.417013 containerd[1516]: time="2025-01-29T16:27:00.416886684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:27:00.417013 containerd[1516]: time="2025-01-29T16:27:00.416963549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:27:00.418552 containerd[1516]: time="2025-01-29T16:27:00.417882593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:27:00.420956 containerd[1516]: time="2025-01-29T16:27:00.418538915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:27:00.447732 containerd[1516]: time="2025-01-29T16:27:00.447678607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pjjr9,Uid:91170ca1-19cd-4c25-a591-9a7f6b7062b6,Namespace:kube-system,Attempt:5,} returns sandbox id \"de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1\"" Jan 29 16:27:00.452781 containerd[1516]: time="2025-01-29T16:27:00.452737764Z" level=info msg="CreateContainer within sandbox \"de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:27:00.455847 systemd[1]: Started cri-containerd-76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d.scope - libcontainer container 76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d. Jan 29 16:27:00.461283 systemd[1]: Started cri-containerd-5f5b82e9d12ea273b3e738a69596e090b6ed3323ba71a67f453fce391bf6e959.scope - libcontainer container 5f5b82e9d12ea273b3e738a69596e090b6ed3323ba71a67f453fce391bf6e959. Jan 29 16:27:00.474302 systemd-networkd[1428]: calicc51adb1685: Link UP Jan 29 16:27:00.475136 systemd-networkd[1428]: calicc51adb1685: Gained carrier Jan 29 16:27:00.485050 containerd[1516]: time="2025-01-29T16:27:00.484906352Z" level=info msg="CreateContainer within sandbox \"de27fa9ee66e11214269e6451a91b24b733330219cb23cd316bf9cb9b0232ed1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"196cdbb3519edd54be1072e7fea9b162684415436d2f01445d903d54b8b55659\"" Jan 29 16:27:00.487109 containerd[1516]: time="2025-01-29T16:27:00.487082026Z" level=info msg="StartContainer for \"196cdbb3519edd54be1072e7fea9b162684415436d2f01445d903d54b8b55659\"" Jan 29 16:27:00.487588 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.024 [INFO][4653] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.045 [INFO][4653] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55b88d6857--fnfkx-eth0 calico-kube-controllers-55b88d6857- calico-system a373ce1d-d072-4edb-a73d-44d8bb96f265 713 0 2025-01-29 16:26:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55b88d6857 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55b88d6857-fnfkx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicc51adb1685 [] []}} ContainerID="4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" Namespace="calico-system" Pod="calico-kube-controllers-55b88d6857-fnfkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b88d6857--fnfkx-" Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.045 [INFO][4653] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" Namespace="calico-system" 
Pod="calico-kube-controllers-55b88d6857-fnfkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b88d6857--fnfkx-eth0" Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.146 [INFO][4743] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" HandleID="k8s-pod-network.4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" Workload="localhost-k8s-calico--kube--controllers--55b88d6857--fnfkx-eth0" Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.220 [INFO][4743] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" HandleID="k8s-pod-network.4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" Workload="localhost-k8s-calico--kube--controllers--55b88d6857--fnfkx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027f710), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55b88d6857-fnfkx", "timestamp":"2025-01-29 16:27:00.1455669 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.220 [INFO][4743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.359 [INFO][4743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.359 [INFO][4743] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.429 [INFO][4743] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" host="localhost" Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.436 [INFO][4743] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.442 [INFO][4743] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.445 [INFO][4743] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.448 [INFO][4743] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.448 [INFO][4743] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" host="localhost" Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.450 [INFO][4743] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.456 [INFO][4743] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" host="localhost" Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.467 [INFO][4743] ipam/ipam.go 1216: Successfully claimed IPs: 
[192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" host="localhost" Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.467 [INFO][4743] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" host="localhost" Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.467 [INFO][4743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:27:00.496740 containerd[1516]: 2025-01-29 16:27:00.467 [INFO][4743] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" HandleID="k8s-pod-network.4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" Workload="localhost-k8s-calico--kube--controllers--55b88d6857--fnfkx-eth0" Jan 29 16:27:00.497473 containerd[1516]: 2025-01-29 16:27:00.470 [INFO][4653] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" Namespace="calico-system" Pod="calico-kube-controllers-55b88d6857-fnfkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b88d6857--fnfkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55b88d6857--fnfkx-eth0", GenerateName:"calico-kube-controllers-55b88d6857-", Namespace:"calico-system", SelfLink:"", UID:"a373ce1d-d072-4edb-a73d-44d8bb96f265", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55b88d6857", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55b88d6857-fnfkx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc51adb1685", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:27:00.497473 containerd[1516]: 2025-01-29 16:27:00.470 [INFO][4653] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" Namespace="calico-system" Pod="calico-kube-controllers-55b88d6857-fnfkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b88d6857--fnfkx-eth0" Jan 29 16:27:00.497473 containerd[1516]: 2025-01-29 16:27:00.470 [INFO][4653] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc51adb1685 ContainerID="4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" Namespace="calico-system" Pod="calico-kube-controllers-55b88d6857-fnfkx" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b88d6857--fnfkx-eth0" Jan 29 16:27:00.497473 containerd[1516]: 2025-01-29 16:27:00.475 [INFO][4653] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" Namespace="calico-system" Pod="calico-kube-controllers-55b88d6857-fnfkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b88d6857--fnfkx-eth0" Jan 29 16:27:00.497473 containerd[1516]: 2025-01-29 16:27:00.476 [INFO][4653] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" Namespace="calico-system" Pod="calico-kube-controllers-55b88d6857-fnfkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b88d6857--fnfkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55b88d6857--fnfkx-eth0", GenerateName:"calico-kube-controllers-55b88d6857-", Namespace:"calico-system", SelfLink:"", UID:"a373ce1d-d072-4edb-a73d-44d8bb96f265", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55b88d6857", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd", Pod:"calico-kube-controllers-55b88d6857-fnfkx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc51adb1685", MAC:"96:48:d3:de:fa:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:27:00.497473 containerd[1516]: 2025-01-29 16:27:00.493 [INFO][4653] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd" Namespace="calico-system" Pod="calico-kube-controllers-55b88d6857-fnfkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b88d6857--fnfkx-eth0" Jan 29 16:27:00.522020 systemd[1]: Started cri-containerd-196cdbb3519edd54be1072e7fea9b162684415436d2f01445d903d54b8b55659.scope - libcontainer container 196cdbb3519edd54be1072e7fea9b162684415436d2f01445d903d54b8b55659. Jan 29 16:27:00.527088 containerd[1516]: time="2025-01-29T16:27:00.527056586Z" level=info msg="StartContainer for \"5f5b82e9d12ea273b3e738a69596e090b6ed3323ba71a67f453fce391bf6e959\" returns successfully" Jan 29 16:27:00.528570 containerd[1516]: time="2025-01-29T16:27:00.527974358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:27:00.528570 containerd[1516]: time="2025-01-29T16:27:00.528187028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:27:00.528570 containerd[1516]: time="2025-01-29T16:27:00.528208318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:27:00.528570 containerd[1516]: time="2025-01-29T16:27:00.528442387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:27:00.548038 containerd[1516]: time="2025-01-29T16:27:00.548005071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-chfgm,Uid:e6a6a4b6-0cc0-4539-9ece-d802ad97d93f,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d\"" Jan 29 16:27:00.551777 containerd[1516]: time="2025-01-29T16:27:00.551744200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 16:27:00.562184 systemd[1]: Started cri-containerd-4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd.scope - libcontainer container 4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd. Jan 29 16:27:00.585507 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:27:00.590067 containerd[1516]: time="2025-01-29T16:27:00.590023828Z" level=info msg="StartContainer for \"196cdbb3519edd54be1072e7fea9b162684415436d2f01445d903d54b8b55659\" returns successfully" Jan 29 16:27:00.591595 systemd-networkd[1428]: cali764322eabde: Link UP Jan 29 16:27:00.593267 systemd-networkd[1428]: cali764322eabde: Gained carrier Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.058 [INFO][4686] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.097 [INFO][4686] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fgvfx-eth0 csi-node-driver- calico-system 6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0 581 0 2025-01-29 16:26:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fgvfx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali764322eabde [] []}} ContainerID="5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" Namespace="calico-system" Pod="csi-node-driver-fgvfx" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvfx-" Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.098 [INFO][4686] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" Namespace="calico-system" Pod="csi-node-driver-fgvfx" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvfx-eth0" Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.151 [INFO][4762] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" 
HandleID="k8s-pod-network.5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" Workload="localhost-k8s-csi--node--driver--fgvfx-eth0" Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.220 [INFO][4762] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" HandleID="k8s-pod-network.5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" Workload="localhost-k8s-csi--node--driver--fgvfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003159b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fgvfx", "timestamp":"2025-01-29 16:27:00.151904087 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.221 [INFO][4762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.467 [INFO][4762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.467 [INFO][4762] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.531 [INFO][4762] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" host="localhost" Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.545 [INFO][4762] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.554 [INFO][4762] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.556 [INFO][4762] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.558 [INFO][4762] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.559 [INFO][4762] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" host="localhost" Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.560 [INFO][4762] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8 Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.568 [INFO][4762] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" host="localhost" Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.580 [INFO][4762] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" host="localhost" Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.580 [INFO][4762] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" host="localhost" Jan 29 
16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.580 [INFO][4762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:27:00.607936 containerd[1516]: 2025-01-29 16:27:00.580 [INFO][4762] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" HandleID="k8s-pod-network.5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" Workload="localhost-k8s-csi--node--driver--fgvfx-eth0" Jan 29 16:27:00.608519 containerd[1516]: 2025-01-29 16:27:00.586 [INFO][4686] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" Namespace="calico-system" Pod="csi-node-driver-fgvfx" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fgvfx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0", ResourceVersion:"581", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fgvfx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali764322eabde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:27:00.608519 containerd[1516]: 2025-01-29 16:27:00.587 [INFO][4686] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" Namespace="calico-system" Pod="csi-node-driver-fgvfx" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvfx-eth0" Jan 29 16:27:00.608519 containerd[1516]: 2025-01-29 16:27:00.587 [INFO][4686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali764322eabde ContainerID="5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" Namespace="calico-system" Pod="csi-node-driver-fgvfx" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvfx-eth0" Jan 29 16:27:00.608519 containerd[1516]: 2025-01-29 16:27:00.592 [INFO][4686] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" Namespace="calico-system" Pod="csi-node-driver-fgvfx" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvfx-eth0" Jan 29 16:27:00.608519 containerd[1516]: 2025-01-29 16:27:00.593 [INFO][4686] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" Namespace="calico-system" Pod="csi-node-driver-fgvfx" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fgvfx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0", ResourceVersion:"581", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8", Pod:"csi-node-driver-fgvfx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali764322eabde", MAC:"de:70:5f:f5:49:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:27:00.608519 containerd[1516]: 2025-01-29 16:27:00.604 [INFO][4686] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8" Namespace="calico-system" Pod="csi-node-driver-fgvfx" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvfx-eth0" Jan 29 16:27:00.621084 containerd[1516]: time="2025-01-29T16:27:00.621031266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b88d6857-fnfkx,Uid:a373ce1d-d072-4edb-a73d-44d8bb96f265,Namespace:calico-system,Attempt:5,} returns sandbox id \"4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd\"" Jan 29 16:27:00.651085 containerd[1516]: time="2025-01-29T16:27:00.650943036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:27:00.651085 containerd[1516]: time="2025-01-29T16:27:00.651020863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:27:00.651085 containerd[1516]: time="2025-01-29T16:27:00.651034258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:27:00.651363 containerd[1516]: time="2025-01-29T16:27:00.651140097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:27:00.673091 systemd[1]: Started cri-containerd-5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8.scope - libcontainer container 5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8. 
Jan 29 16:27:00.678379 systemd-networkd[1428]: cali2ad495ed2b6: Link UP Jan 29 16:27:00.678713 systemd-networkd[1428]: cali2ad495ed2b6: Gained carrier Jan 29 16:27:00.691727 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.059 [INFO][4695] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.098 [INFO][4695] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84ffc4856f--b8jf8-eth0 calico-apiserver-84ffc4856f- calico-apiserver ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e 715 0 2025-01-29 16:26:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84ffc4856f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84ffc4856f-b8jf8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2ad495ed2b6 [] []}} ContainerID="f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-b8jf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--b8jf8-" Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.098 [INFO][4695] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-b8jf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--b8jf8-eth0" Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.153 [INFO][4755] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" HandleID="k8s-pod-network.f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" Workload="localhost-k8s-calico--apiserver--84ffc4856f--b8jf8-eth0" Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.221 [INFO][4755] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" HandleID="k8s-pod-network.f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" Workload="localhost-k8s-calico--apiserver--84ffc4856f--b8jf8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000360950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84ffc4856f-b8jf8", "timestamp":"2025-01-29 16:27:00.153651436 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.221 [INFO][4755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.580 [INFO][4755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.580 [INFO][4755] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.629 [INFO][4755] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" host="localhost" Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.645 [INFO][4755] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.652 [INFO][4755] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.654 [INFO][4755] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.656 [INFO][4755] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.656 [INFO][4755] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" host="localhost" Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.658 [INFO][4755] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.661 [INFO][4755] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" host="localhost" Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.668 [INFO][4755] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" host="localhost" Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.668 [INFO][4755] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" host="localhost" Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.668 [INFO][4755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 16:27:00.694859 containerd[1516]: 2025-01-29 16:27:00.668 [INFO][4755] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" HandleID="k8s-pod-network.f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" Workload="localhost-k8s-calico--apiserver--84ffc4856f--b8jf8-eth0" Jan 29 16:27:00.695410 containerd[1516]: 2025-01-29 16:27:00.671 [INFO][4695] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-b8jf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--b8jf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84ffc4856f--b8jf8-eth0", GenerateName:"calico-apiserver-84ffc4856f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84ffc4856f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84ffc4856f-b8jf8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ad495ed2b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:27:00.695410 containerd[1516]: 2025-01-29 16:27:00.672 [INFO][4695] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-b8jf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--b8jf8-eth0" Jan 29 16:27:00.695410 containerd[1516]: 2025-01-29 16:27:00.672 [INFO][4695] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ad495ed2b6 ContainerID="f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-b8jf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--b8jf8-eth0" Jan 29 16:27:00.695410 containerd[1516]: 2025-01-29 16:27:00.679 [INFO][4695] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-b8jf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--b8jf8-eth0" Jan 29 16:27:00.695410 containerd[1516]: 2025-01-29 16:27:00.679 [INFO][4695] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-b8jf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--b8jf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84ffc4856f--b8jf8-eth0", GenerateName:"calico-apiserver-84ffc4856f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84ffc4856f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b", Pod:"calico-apiserver-84ffc4856f-b8jf8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ad495ed2b6", MAC:"a6:da:74:81:42:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:27:00.695410 containerd[1516]: 2025-01-29 16:27:00.689 [INFO][4695] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b" Namespace="calico-apiserver" Pod="calico-apiserver-84ffc4856f-b8jf8" WorkloadEndpoint="localhost-k8s-calico--apiserver--84ffc4856f--b8jf8-eth0" Jan 29 16:27:00.703300 containerd[1516]: time="2025-01-29T16:27:00.703258784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvfx,Uid:6d3d71b5-7b0e-4f54-a59d-f9ebb4a75dd0,Namespace:calico-system,Attempt:5,} returns sandbox id \"5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8\"" Jan 29 16:27:00.717488 containerd[1516]: time="2025-01-29T16:27:00.716901364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:27:00.717635 containerd[1516]: time="2025-01-29T16:27:00.717470422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:27:00.717635 containerd[1516]: time="2025-01-29T16:27:00.717482545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:27:00.717635 containerd[1516]: time="2025-01-29T16:27:00.717567184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:27:00.741006 systemd[1]: Started cri-containerd-f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b.scope - libcontainer container f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b. 
Jan 29 16:27:00.754109 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:27:00.780341 containerd[1516]: time="2025-01-29T16:27:00.780222280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ffc4856f-b8jf8,Uid:ff9f28aa-8c77-44d4-a5fb-e8a76b9ac18e,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b\"" Jan 29 16:27:00.903512 kubelet[2658]: I0129 16:27:00.903432 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pjjr9" podStartSLOduration=31.903379201 podStartE2EDuration="31.903379201s" podCreationTimestamp="2025-01-29 16:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:27:00.902774306 +0000 UTC m=+38.654292448" watchObservedRunningTime="2025-01-29 16:27:00.903379201 +0000 UTC m=+38.654897333" Jan 29 16:27:00.914087 kubelet[2658]: I0129 16:27:00.914029 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6ssm5" podStartSLOduration=31.914011661 podStartE2EDuration="31.914011661s" podCreationTimestamp="2025-01-29 16:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:27:00.913630675 +0000 UTC m=+38.665148807" watchObservedRunningTime="2025-01-29 16:27:00.914011661 +0000 UTC m=+38.665529793" Jan 29 16:27:01.408987 systemd-networkd[1428]: calia12ff5a43a6: Gained IPv6LL Jan 29 16:27:01.664959 systemd-networkd[1428]: cali9589f3d6d5e: Gained IPv6LL Jan 29 16:27:01.859125 systemd-networkd[1428]: cali9b9ecba1cd7: Gained IPv6LL Jan 29 16:27:02.176973 systemd-networkd[1428]: calicc51adb1685: Gained IPv6LL Jan 29 16:27:02.240924 systemd-networkd[1428]: cali764322eabde: Gained IPv6LL Jan 29 16:27:02.625115 systemd-networkd[1428]: cali2ad495ed2b6: Gained IPv6LL Jan 29 16:27:02.702695 containerd[1516]: time="2025-01-29T16:27:02.702623753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:27:02.703299 containerd[1516]: time="2025-01-29T16:27:02.703268774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 16:27:02.704445 containerd[1516]: time="2025-01-29T16:27:02.704411829Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:27:02.706543 containerd[1516]: time="2025-01-29T16:27:02.706509107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:27:02.707183 containerd[1516]: time="2025-01-29T16:27:02.707152905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.155140763s" Jan 29 16:27:02.707218 containerd[1516]: 
time="2025-01-29T16:27:02.707180527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 16:27:02.707922 containerd[1516]: time="2025-01-29T16:27:02.707891220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 16:27:02.708889 containerd[1516]: time="2025-01-29T16:27:02.708832166Z" level=info msg="CreateContainer within sandbox \"76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 16:27:02.723631 containerd[1516]: time="2025-01-29T16:27:02.723597251Z" level=info msg="CreateContainer within sandbox \"76d282d264887756fe8ed3f5fb5a63335820653a3f4f1db678d68e70d1f06c9d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1e4470c28ea5dd9b425a94d5be0719fdeb89ef02c1d0be3df63fe46f20ea818a\"" Jan 29 16:27:02.724165 containerd[1516]: time="2025-01-29T16:27:02.724113729Z" level=info msg="StartContainer for \"1e4470c28ea5dd9b425a94d5be0719fdeb89ef02c1d0be3df63fe46f20ea818a\"" Jan 29 16:27:02.757933 systemd[1]: Started cri-containerd-1e4470c28ea5dd9b425a94d5be0719fdeb89ef02c1d0be3df63fe46f20ea818a.scope - libcontainer container 1e4470c28ea5dd9b425a94d5be0719fdeb89ef02c1d0be3df63fe46f20ea818a. Jan 29 16:27:02.798932 containerd[1516]: time="2025-01-29T16:27:02.798891115Z" level=info msg="StartContainer for \"1e4470c28ea5dd9b425a94d5be0719fdeb89ef02c1d0be3df63fe46f20ea818a\" returns successfully" Jan 29 16:27:02.923651 kubelet[2658]: I0129 16:27:02.923230 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84ffc4856f-chfgm" podStartSLOduration=22.765453808 podStartE2EDuration="24.9232153s" podCreationTimestamp="2025-01-29 16:26:38 +0000 UTC" firstStartedPulling="2025-01-29 16:27:00.550007649 +0000 UTC m=+38.301525782" lastFinishedPulling="2025-01-29 16:27:02.707769121 +0000 UTC m=+40.459287274" observedRunningTime="2025-01-29 16:27:02.922710813 +0000 UTC m=+40.674228945" watchObservedRunningTime="2025-01-29 16:27:02.9232153 +0000 UTC m=+40.674733432" Jan 29 16:27:04.306168 systemd[1]: Started sshd@11-10.0.0.146:22-10.0.0.1:37652.service - OpenSSH per-connection server daemon (10.0.0.1:37652). Jan 29 16:27:04.500093 sshd[5409]: Accepted publickey for core from 10.0.0.1 port 37652 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:27:04.501892 sshd-session[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:27:04.506391 systemd-logind[1493]: New session 12 of user core. Jan 29 16:27:04.512969 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 16:27:04.651066 sshd[5434]: Connection closed by 10.0.0.1 port 37652 Jan 29 16:27:04.651353 sshd-session[5409]: pam_unix(sshd:session): session closed for user core Jan 29 16:27:04.656206 systemd[1]: sshd@11-10.0.0.146:22-10.0.0.1:37652.service: Deactivated successfully. Jan 29 16:27:04.658623 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 16:27:04.659512 systemd-logind[1493]: Session 12 logged out. Waiting for processes to exit. Jan 29 16:27:04.660614 systemd-logind[1493]: Removed session 12. 
Jan 29 16:27:05.139537 containerd[1516]: time="2025-01-29T16:27:05.139482169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:27:05.140362 containerd[1516]: time="2025-01-29T16:27:05.140323728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 16:27:05.141509 containerd[1516]: time="2025-01-29T16:27:05.141482373Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:27:05.144051 containerd[1516]: time="2025-01-29T16:27:05.144017219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:27:05.144630 containerd[1516]: time="2025-01-29T16:27:05.144603280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.436687573s" Jan 29 16:27:05.144676 containerd[1516]: time="2025-01-29T16:27:05.144627725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 16:27:05.145620 containerd[1516]: time="2025-01-29T16:27:05.145468213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 16:27:05.152250 containerd[1516]: time="2025-01-29T16:27:05.152216828Z" level=info msg="CreateContainer within sandbox \"4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 16:27:05.166702 containerd[1516]: time="2025-01-29T16:27:05.166660934Z" level=info msg="CreateContainer within sandbox \"4dc0a4fb0a8d3f108fc1008148e8bbec47ef452c8b943ca97c637263b9c8e5bd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7c7164cd64797cb5cdc409e818a212baebb393a6e5652510c05c39560638fbbc\"" Jan 29 16:27:05.167127 containerd[1516]: time="2025-01-29T16:27:05.167097975Z" level=info msg="StartContainer for \"7c7164cd64797cb5cdc409e818a212baebb393a6e5652510c05c39560638fbbc\"" Jan 29 16:27:05.204954 systemd[1]: Started cri-containerd-7c7164cd64797cb5cdc409e818a212baebb393a6e5652510c05c39560638fbbc.scope - libcontainer container 7c7164cd64797cb5cdc409e818a212baebb393a6e5652510c05c39560638fbbc. 
Jan 29 16:27:05.246597 containerd[1516]: time="2025-01-29T16:27:05.246547007Z" level=info msg="StartContainer for \"7c7164cd64797cb5cdc409e818a212baebb393a6e5652510c05c39560638fbbc\" returns successfully" Jan 29 16:27:06.978334 kubelet[2658]: I0129 16:27:06.978254 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55b88d6857-fnfkx" podStartSLOduration=23.458179196 podStartE2EDuration="27.978233065s" podCreationTimestamp="2025-01-29 16:26:39 +0000 UTC" firstStartedPulling="2025-01-29 16:27:00.625283428 +0000 UTC m=+38.376801560" lastFinishedPulling="2025-01-29 16:27:05.145337297 +0000 UTC m=+42.896855429" observedRunningTime="2025-01-29 16:27:06.025123156 +0000 UTC m=+43.776641298" watchObservedRunningTime="2025-01-29 16:27:06.978233065 +0000 UTC m=+44.729751198" Jan 29 16:27:07.161829 containerd[1516]: time="2025-01-29T16:27:07.161753400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:27:07.162459 containerd[1516]: time="2025-01-29T16:27:07.162415903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 16:27:07.163437 containerd[1516]: time="2025-01-29T16:27:07.163400120Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:27:07.165440 containerd[1516]: time="2025-01-29T16:27:07.165405243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:27:07.166060 containerd[1516]: time="2025-01-29T16:27:07.166028032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.020537858s" Jan 29 16:27:07.166060 containerd[1516]: time="2025-01-29T16:27:07.166054742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 16:27:07.167245 containerd[1516]: time="2025-01-29T16:27:07.166975590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 16:27:07.168190 containerd[1516]: time="2025-01-29T16:27:07.168154031Z" level=info msg="CreateContainer within sandbox \"5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 16:27:07.192039 containerd[1516]: time="2025-01-29T16:27:07.191996562Z" level=info msg="CreateContainer within sandbox \"5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a9d72e7d065315bfd2237e0a07dadcbdb928c6bf5785a6f95352277aca9917a5\"" Jan 29 16:27:07.193207 containerd[1516]: time="2025-01-29T16:27:07.192477044Z" level=info msg="StartContainer for \"a9d72e7d065315bfd2237e0a07dadcbdb928c6bf5785a6f95352277aca9917a5\"" Jan 29 16:27:07.227928 systemd[1]: Started cri-containerd-a9d72e7d065315bfd2237e0a07dadcbdb928c6bf5785a6f95352277aca9917a5.scope - libcontainer container 
a9d72e7d065315bfd2237e0a07dadcbdb928c6bf5785a6f95352277aca9917a5. Jan 29 16:27:07.261920 containerd[1516]: time="2025-01-29T16:27:07.261885311Z" level=info msg="StartContainer for \"a9d72e7d065315bfd2237e0a07dadcbdb928c6bf5785a6f95352277aca9917a5\" returns successfully" Jan 29 16:27:07.542123 containerd[1516]: time="2025-01-29T16:27:07.541983195Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:27:07.542870 containerd[1516]: time="2025-01-29T16:27:07.542827750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 16:27:07.544714 containerd[1516]: time="2025-01-29T16:27:07.544686968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 377.683837ms" Jan 29 16:27:07.544820 containerd[1516]: time="2025-01-29T16:27:07.544713939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 16:27:07.545739 containerd[1516]: time="2025-01-29T16:27:07.545659123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 16:27:07.547181 containerd[1516]: time="2025-01-29T16:27:07.547153317Z" level=info msg="CreateContainer within sandbox \"f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 16:27:07.564440 containerd[1516]: time="2025-01-29T16:27:07.564261891Z" level=info msg="CreateContainer within sandbox \"f6361109dbd94961fd87374c0490ffb6f78a3faa3b4d1f847201fe50c26fa48b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c1fab67cc5302be939140324edb8504026bc8652655c320a327e8d5890c0e9c1\"" Jan 29 16:27:07.565406 containerd[1516]: time="2025-01-29T16:27:07.565373687Z" level=info msg="StartContainer for \"c1fab67cc5302be939140324edb8504026bc8652655c320a327e8d5890c0e9c1\"" Jan 29 16:27:07.600978 systemd[1]: Started cri-containerd-c1fab67cc5302be939140324edb8504026bc8652655c320a327e8d5890c0e9c1.scope - libcontainer container c1fab67cc5302be939140324edb8504026bc8652655c320a327e8d5890c0e9c1. 
Jan 29 16:27:07.660085 containerd[1516]: time="2025-01-29T16:27:07.660033230Z" level=info msg="StartContainer for \"c1fab67cc5302be939140324edb8504026bc8652655c320a327e8d5890c0e9c1\" returns successfully" Jan 29 16:27:07.807009 kubelet[2658]: I0129 16:27:07.806878 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:27:07.936475 kubelet[2658]: I0129 16:27:07.936430 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84ffc4856f-b8jf8" podStartSLOduration=23.17235926 podStartE2EDuration="29.936414128s" podCreationTimestamp="2025-01-29 16:26:38 +0000 UTC" firstStartedPulling="2025-01-29 16:27:00.78146307 +0000 UTC m=+38.532981202" lastFinishedPulling="2025-01-29 16:27:07.545517938 +0000 UTC m=+45.297036070" observedRunningTime="2025-01-29 16:27:07.936001633 +0000 UTC m=+45.687519775" watchObservedRunningTime="2025-01-29 16:27:07.936414128 +0000 UTC m=+45.687932260" Jan 29 16:27:08.593824 kernel: bpftool[5681]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 16:27:08.906371 systemd-networkd[1428]: vxlan.calico: Link UP Jan 29 16:27:08.906383 systemd-networkd[1428]: vxlan.calico: Gained carrier Jan 29 16:27:08.935910 kubelet[2658]: I0129 16:27:08.935552 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:27:09.662740 containerd[1516]: time="2025-01-29T16:27:09.662603814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:27:09.664305 containerd[1516]: time="2025-01-29T16:27:09.664220919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 16:27:09.665422 containerd[1516]: time="2025-01-29T16:27:09.665383480Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:27:09.668013 containerd[1516]: time="2025-01-29T16:27:09.667956498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:27:09.668710 containerd[1516]: time="2025-01-29T16:27:09.668675117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.122985147s" Jan 29 16:27:09.668760 containerd[1516]: time="2025-01-29T16:27:09.668711886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 16:27:09.670493 systemd[1]: Started sshd@12-10.0.0.146:22-10.0.0.1:56286.service - OpenSSH per-connection server daemon (10.0.0.1:56286). 
Jan 29 16:27:09.672518 containerd[1516]: time="2025-01-29T16:27:09.672486729Z" level=info msg="CreateContainer within sandbox \"5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 16:27:09.694825 containerd[1516]: time="2025-01-29T16:27:09.693069841Z" level=info msg="CreateContainer within sandbox \"5573ea8b04719e25e9c59a16c83d91f9f2e12b7d2741f34bc80b1692daa1f1f8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4909ff3ebb5bdad0ba5ba1d5117d281f5b8db6a92eb650ec2d3d5bc71b3642e5\"" Jan 29 16:27:09.694825 containerd[1516]: time="2025-01-29T16:27:09.693706666Z" level=info msg="StartContainer for \"4909ff3ebb5bdad0ba5ba1d5117d281f5b8db6a92eb650ec2d3d5bc71b3642e5\"" Jan 29 16:27:09.722682 sshd[5811]: Accepted publickey for core from 10.0.0.1 port 56286 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:27:09.726218 sshd-session[5811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:27:09.734974 systemd[1]: Started cri-containerd-4909ff3ebb5bdad0ba5ba1d5117d281f5b8db6a92eb650ec2d3d5bc71b3642e5.scope - libcontainer container 4909ff3ebb5bdad0ba5ba1d5117d281f5b8db6a92eb650ec2d3d5bc71b3642e5. Jan 29 16:27:09.737761 systemd-logind[1493]: New session 13 of user core. Jan 29 16:27:09.739505 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 16:27:09.778754 containerd[1516]: time="2025-01-29T16:27:09.778696921Z" level=info msg="StartContainer for \"4909ff3ebb5bdad0ba5ba1d5117d281f5b8db6a92eb650ec2d3d5bc71b3642e5\" returns successfully" Jan 29 16:27:09.886090 sshd[5838]: Connection closed by 10.0.0.1 port 56286 Jan 29 16:27:09.887043 sshd-session[5811]: pam_unix(sshd:session): session closed for user core Jan 29 16:27:09.900199 systemd[1]: sshd@12-10.0.0.146:22-10.0.0.1:56286.service: Deactivated successfully. Jan 29 16:27:09.902687 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 16:27:09.903973 systemd-logind[1493]: Session 13 logged out. Waiting for processes to exit. Jan 29 16:27:09.916404 systemd[1]: Started sshd@13-10.0.0.146:22-10.0.0.1:56302.service - OpenSSH per-connection server daemon (10.0.0.1:56302). Jan 29 16:27:09.918087 systemd-logind[1493]: Removed session 13. Jan 29 16:27:09.955563 kubelet[2658]: I0129 16:27:09.955494 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fgvfx" podStartSLOduration=22.990603065 podStartE2EDuration="31.955465708s" podCreationTimestamp="2025-01-29 16:26:38 +0000 UTC" firstStartedPulling="2025-01-29 16:27:00.70471647 +0000 UTC m=+38.456234602" lastFinishedPulling="2025-01-29 16:27:09.669579113 +0000 UTC m=+47.421097245" observedRunningTime="2025-01-29 16:27:09.954943729 +0000 UTC m=+47.706462132" watchObservedRunningTime="2025-01-29 16:27:09.955465708 +0000 UTC m=+47.706983840" Jan 29 16:27:09.968198 sshd[5866]: Accepted publickey for core from 10.0.0.1 port 56302 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:27:09.969910 sshd-session[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:27:09.975085 systemd-logind[1493]: New session 14 of user core. Jan 29 16:27:09.982976 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 29 16:27:10.048958 systemd-networkd[1428]: vxlan.calico: Gained IPv6LL Jan 29 16:27:10.184469 sshd[5869]: Connection closed by 10.0.0.1 port 56302 Jan 29 16:27:10.186111 sshd-session[5866]: pam_unix(sshd:session): session closed for user core Jan 29 16:27:10.198699 systemd[1]: sshd@13-10.0.0.146:22-10.0.0.1:56302.service: Deactivated successfully. Jan 29 16:27:10.200717 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 16:27:10.203065 systemd-logind[1493]: Session 14 logged out. Waiting for processes to exit. Jan 29 16:27:10.212787 systemd[1]: Started sshd@14-10.0.0.146:22-10.0.0.1:56316.service - OpenSSH per-connection server daemon (10.0.0.1:56316). Jan 29 16:27:10.215686 systemd-logind[1493]: Removed session 14. Jan 29 16:27:10.247896 sshd[5880]: Accepted publickey for core from 10.0.0.1 port 56316 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:27:10.249906 sshd-session[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:27:10.254560 systemd-logind[1493]: New session 15 of user core. Jan 29 16:27:10.264062 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 16:27:10.393109 kubelet[2658]: I0129 16:27:10.393056 2658 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 16:27:10.393109 kubelet[2658]: I0129 16:27:10.393108 2658 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 16:27:10.396064 sshd[5883]: Connection closed by 10.0.0.1 port 56316 Jan 29 16:27:10.396911 sshd-session[5880]: pam_unix(sshd:session): session closed for user core Jan 29 16:27:10.402707 systemd[1]: sshd@14-10.0.0.146:22-10.0.0.1:56316.service: Deactivated successfully. Jan 29 16:27:10.406267 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 16:27:10.409447 systemd-logind[1493]: Session 15 logged out. Waiting for processes to exit. Jan 29 16:27:10.410717 systemd-logind[1493]: Removed session 15. Jan 29 16:27:15.412995 systemd[1]: Started sshd@15-10.0.0.146:22-10.0.0.1:56324.service - OpenSSH per-connection server daemon (10.0.0.1:56324). Jan 29 16:27:15.454812 sshd[5911]: Accepted publickey for core from 10.0.0.1 port 56324 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:27:15.456572 sshd-session[5911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:27:15.462025 systemd-logind[1493]: New session 16 of user core. Jan 29 16:27:15.475979 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 16:27:15.592146 sshd[5913]: Connection closed by 10.0.0.1 port 56324 Jan 29 16:27:15.592497 sshd-session[5911]: pam_unix(sshd:session): session closed for user core Jan 29 16:27:15.596969 systemd[1]: sshd@15-10.0.0.146:22-10.0.0.1:56324.service: Deactivated successfully. Jan 29 16:27:15.599302 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 16:27:15.600056 systemd-logind[1493]: Session 16 logged out. Waiting for processes to exit. Jan 29 16:27:15.601150 systemd-logind[1493]: Removed session 16. Jan 29 16:27:20.604579 systemd[1]: Started sshd@16-10.0.0.146:22-10.0.0.1:59610.service - OpenSSH per-connection server daemon (10.0.0.1:59610). 
Jan 29 16:27:20.641754 sshd[5934]: Accepted publickey for core from 10.0.0.1 port 59610 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:27:20.643235 sshd-session[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:27:20.647251 systemd-logind[1493]: New session 17 of user core. Jan 29 16:27:20.656924 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 16:27:20.759412 sshd[5936]: Connection closed by 10.0.0.1 port 59610 Jan 29 16:27:20.759817 sshd-session[5934]: pam_unix(sshd:session): session closed for user core Jan 29 16:27:20.771640 systemd[1]: sshd@16-10.0.0.146:22-10.0.0.1:59610.service: Deactivated successfully. Jan 29 16:27:20.773601 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:27:20.775104 systemd-logind[1493]: Session 17 logged out. Waiting for processes to exit. Jan 29 16:27:20.787153 systemd[1]: Started sshd@17-10.0.0.146:22-10.0.0.1:59616.service - OpenSSH per-connection server daemon (10.0.0.1:59616). Jan 29 16:27:20.788273 systemd-logind[1493]: Removed session 17. Jan 29 16:27:20.820053 sshd[5950]: Accepted publickey for core from 10.0.0.1 port 59616 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:27:20.821369 sshd-session[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:27:20.825431 systemd-logind[1493]: New session 18 of user core. Jan 29 16:27:20.834905 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 16:27:21.113397 sshd[5953]: Connection closed by 10.0.0.1 port 59616 Jan 29 16:27:21.114526 sshd-session[5950]: pam_unix(sshd:session): session closed for user core Jan 29 16:27:21.122416 systemd[1]: sshd@17-10.0.0.146:22-10.0.0.1:59616.service: Deactivated successfully. Jan 29 16:27:21.124256 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 16:27:21.125602 systemd-logind[1493]: Session 18 logged out. Waiting for processes to exit. Jan 29 16:27:21.134462 systemd[1]: Started sshd@18-10.0.0.146:22-10.0.0.1:59630.service - OpenSSH per-connection server daemon (10.0.0.1:59630). Jan 29 16:27:21.135290 systemd-logind[1493]: Removed session 18. Jan 29 16:27:21.168756 sshd[5964]: Accepted publickey for core from 10.0.0.1 port 59630 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:27:21.170065 sshd-session[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:27:21.174084 systemd-logind[1493]: New session 19 of user core. Jan 29 16:27:21.183925 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 16:27:21.922984 sshd[5967]: Connection closed by 10.0.0.1 port 59630 Jan 29 16:27:21.924106 sshd-session[5964]: pam_unix(sshd:session): session closed for user core Jan 29 16:27:21.937734 systemd[1]: sshd@18-10.0.0.146:22-10.0.0.1:59630.service: Deactivated successfully. Jan 29 16:27:21.941595 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 16:27:21.944286 systemd-logind[1493]: Session 19 logged out. Waiting for processes to exit. Jan 29 16:27:21.951502 systemd[1]: Started sshd@19-10.0.0.146:22-10.0.0.1:59638.service - OpenSSH per-connection server daemon (10.0.0.1:59638). Jan 29 16:27:21.954254 systemd-logind[1493]: Removed session 19. 
Jan 29 16:27:21.991620 sshd[5985]: Accepted publickey for core from 10.0.0.1 port 59638 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:27:21.993057 sshd-session[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:27:21.997256 systemd-logind[1493]: New session 20 of user core. Jan 29 16:27:22.008943 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 16:27:22.213226 sshd[5988]: Connection closed by 10.0.0.1 port 59638 Jan 29 16:27:22.214028 sshd-session[5985]: pam_unix(sshd:session): session closed for user core Jan 29 16:27:22.226144 systemd[1]: sshd@19-10.0.0.146:22-10.0.0.1:59638.service: Deactivated successfully. Jan 29 16:27:22.228684 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 16:27:22.230369 systemd-logind[1493]: Session 20 logged out. Waiting for processes to exit. Jan 29 16:27:22.240085 systemd[1]: Started sshd@20-10.0.0.146:22-10.0.0.1:59654.service - OpenSSH per-connection server daemon (10.0.0.1:59654). Jan 29 16:27:22.241070 systemd-logind[1493]: Removed session 20. Jan 29 16:27:22.273829 sshd[5998]: Accepted publickey for core from 10.0.0.1 port 59654 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:27:22.275135 sshd-session[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:27:22.279217 systemd-logind[1493]: New session 21 of user core. Jan 29 16:27:22.293927 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 16:27:22.326805 containerd[1516]: time="2025-01-29T16:27:22.326754067Z" level=info msg="StopPodSandbox for \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\"" Jan 29 16:27:22.327218 containerd[1516]: time="2025-01-29T16:27:22.326878557Z" level=info msg="TearDown network for sandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\" successfully" Jan 29 16:27:22.327218 containerd[1516]: time="2025-01-29T16:27:22.326925276Z" level=info msg="StopPodSandbox for \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\" returns successfully" Jan 29 16:27:22.332534 containerd[1516]: time="2025-01-29T16:27:22.332490539Z" level=info msg="RemovePodSandbox for \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\"" Jan 29 16:27:22.349711 containerd[1516]: time="2025-01-29T16:27:22.349657952Z" level=info msg="Forcibly stopping sandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\"" Jan 29 16:27:22.349883 containerd[1516]: time="2025-01-29T16:27:22.349813462Z" level=info msg="TearDown network for sandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\" successfully" Jan 29 16:27:22.476731 containerd[1516]: time="2025-01-29T16:27:22.476398949Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:27:22.476731 containerd[1516]: time="2025-01-29T16:27:22.476521966Z" level=info msg="RemovePodSandbox \"ae41e10fe52e2932686f3fedfb2f5a695025fc5d8e8bf0c0690850e332fe400b\" returns successfully" Jan 29 16:27:22.477326 containerd[1516]: time="2025-01-29T16:27:22.477265877Z" level=info msg="StopPodSandbox for \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\"" Jan 29 16:27:22.477565 containerd[1516]: time="2025-01-29T16:27:22.477453498Z" level=info msg="TearDown network for sandbox \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\" successfully" Jan 29 16:27:22.477565 containerd[1516]: time="2025-01-29T16:27:22.477482994Z" level=info msg="StopPodSandbox for \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\" returns successfully" Jan 29 16:27:22.478028 containerd[1516]: time="2025-01-29T16:27:22.477998436Z" level=info msg="RemovePodSandbox for \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\"" Jan 29 16:27:22.478107 containerd[1516]: time="2025-01-29T16:27:22.478029054Z" level=info msg="Forcibly stopping sandbox \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\"" Jan 29 16:27:22.478173 containerd[1516]: time="2025-01-29T16:27:22.478115841Z" level=info msg="TearDown network for sandbox \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\" successfully" Jan 29 16:27:22.482694 containerd[1516]: time="2025-01-29T16:27:22.482643268Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.482788 containerd[1516]: time="2025-01-29T16:27:22.482706450Z" level=info msg="RemovePodSandbox \"84d9d0889542104a3526293cfa212ba597fcf026dd2669620cce40a10d9cb4b7\" returns successfully" Jan 29 16:27:22.483187 containerd[1516]: time="2025-01-29T16:27:22.483162507Z" level=info msg="StopPodSandbox for \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\"" Jan 29 16:27:22.483281 containerd[1516]: time="2025-01-29T16:27:22.483269212Z" level=info msg="TearDown network for sandbox \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\" successfully" Jan 29 16:27:22.483343 containerd[1516]: time="2025-01-29T16:27:22.483283308Z" level=info msg="StopPodSandbox for \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\" returns successfully" Jan 29 16:27:22.483621 containerd[1516]: time="2025-01-29T16:27:22.483597403Z" level=info msg="RemovePodSandbox for \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\"" Jan 29 16:27:22.483679 containerd[1516]: time="2025-01-29T16:27:22.483629184Z" level=info msg="Forcibly stopping sandbox \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\"" Jan 29 16:27:22.483764 containerd[1516]: time="2025-01-29T16:27:22.483718566Z" level=info msg="TearDown network for sandbox \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\" successfully" Jan 29 16:27:22.490038 containerd[1516]: time="2025-01-29T16:27:22.490003151Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:27:22.490114 containerd[1516]: time="2025-01-29T16:27:22.490061413Z" level=info msg="RemovePodSandbox \"b65a2bcad4a13488c53c0517fa01f7fbdfa2bdf7e53a9039ae6ed767cfe3129a\" returns successfully" Jan 29 16:27:22.490608 containerd[1516]: time="2025-01-29T16:27:22.490426847Z" level=info msg="StopPodSandbox for \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\"" Jan 29 16:27:22.490608 containerd[1516]: time="2025-01-29T16:27:22.490515727Z" level=info msg="TearDown network for sandbox \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\" successfully" Jan 29 16:27:22.490608 containerd[1516]: time="2025-01-29T16:27:22.490535485Z" level=info msg="StopPodSandbox for \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\" returns successfully" Jan 29 16:27:22.491455 containerd[1516]: time="2025-01-29T16:27:22.491416880Z" level=info msg="RemovePodSandbox for \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\"" Jan 29 16:27:22.491455 containerd[1516]: time="2025-01-29T16:27:22.491441207Z" level=info msg="Forcibly stopping sandbox \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\"" Jan 29 16:27:22.491670 containerd[1516]: time="2025-01-29T16:27:22.491513365Z" level=info msg="TearDown network for sandbox \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\" successfully" Jan 29 16:27:22.495705 containerd[1516]: time="2025-01-29T16:27:22.495650802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.495705 containerd[1516]: time="2025-01-29T16:27:22.495720846Z" level=info msg="RemovePodSandbox \"b39a5fed448ea46212c636fc1e0db7825942d291b5a21b6385aebba1a25b6ed2\" returns successfully" Jan 29 16:27:22.496230 containerd[1516]: time="2025-01-29T16:27:22.496190069Z" level=info msg="StopPodSandbox for \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\"" Jan 29 16:27:22.496357 containerd[1516]: time="2025-01-29T16:27:22.496330499Z" level=info msg="TearDown network for sandbox \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\" successfully" Jan 29 16:27:22.496357 containerd[1516]: time="2025-01-29T16:27:22.496346780Z" level=info msg="StopPodSandbox for \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\" returns successfully" Jan 29 16:27:22.496649 containerd[1516]: time="2025-01-29T16:27:22.496621859Z" level=info msg="RemovePodSandbox for \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\"" Jan 29 16:27:22.496649 containerd[1516]: time="2025-01-29T16:27:22.496643281Z" level=info msg="Forcibly stopping sandbox \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\"" Jan 29 16:27:22.496759 containerd[1516]: time="2025-01-29T16:27:22.496712153Z" level=info msg="TearDown network for sandbox \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\" successfully" Jan 29 16:27:22.504195 containerd[1516]: time="2025-01-29T16:27:22.504118706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:27:22.504360 containerd[1516]: time="2025-01-29T16:27:22.504206174Z" level=info msg="RemovePodSandbox \"febaeb392d793426833b318448e9b47ec84c33e55b12366a2be78f679af38de8\" returns successfully" Jan 29 16:27:22.504754 containerd[1516]: time="2025-01-29T16:27:22.504712528Z" level=info msg="StopPodSandbox for \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\"" Jan 29 16:27:22.504852 containerd[1516]: time="2025-01-29T16:27:22.504825996Z" level=info msg="TearDown network for sandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\" successfully" Jan 29 16:27:22.504852 containerd[1516]: time="2025-01-29T16:27:22.504841977Z" level=info msg="StopPodSandbox for \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\" returns successfully" Jan 29 16:27:22.505082 containerd[1516]: time="2025-01-29T16:27:22.505058003Z" level=info msg="RemovePodSandbox for \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\"" Jan 29 16:27:22.505082 containerd[1516]: time="2025-01-29T16:27:22.505079524Z" level=info msg="Forcibly stopping sandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\"" Jan 29 16:27:22.505174 containerd[1516]: time="2025-01-29T16:27:22.505144038Z" level=info msg="TearDown network for sandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\" successfully" Jan 29 16:27:22.509568 containerd[1516]: time="2025-01-29T16:27:22.509393570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.509568 containerd[1516]: time="2025-01-29T16:27:22.509469335Z" level=info msg="RemovePodSandbox \"71952252c10e8e3f4e8653cf00af5156ed103d1c8deff124212d918cd00946be\" returns successfully" Jan 29 16:27:22.509900 containerd[1516]: time="2025-01-29T16:27:22.509867070Z" level=info msg="StopPodSandbox for \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\"" Jan 29 16:27:22.510021 containerd[1516]: time="2025-01-29T16:27:22.509992612Z" level=info msg="TearDown network for sandbox \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\" successfully" Jan 29 16:27:22.510021 containerd[1516]: time="2025-01-29T16:27:22.510009134Z" level=info msg="StopPodSandbox for \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\" returns successfully" Jan 29 16:27:22.510418 containerd[1516]: time="2025-01-29T16:27:22.510380067Z" level=info msg="RemovePodSandbox for \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\"" Jan 29 16:27:22.510418 containerd[1516]: time="2025-01-29T16:27:22.510408642Z" level=info msg="Forcibly stopping sandbox \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\"" Jan 29 16:27:22.510637 containerd[1516]: time="2025-01-29T16:27:22.510512952Z" level=info msg="TearDown network for sandbox \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\" successfully" Jan 29 16:27:22.520914 containerd[1516]: time="2025-01-29T16:27:22.520454430Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:27:22.520914 containerd[1516]: time="2025-01-29T16:27:22.520522000Z" level=info msg="RemovePodSandbox \"2386c5b8439259c12198c7d26114646022406041a264923c7313443278eb4e81\" returns successfully" Jan 29 16:27:22.521073 containerd[1516]: time="2025-01-29T16:27:22.521000731Z" level=info msg="StopPodSandbox for \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\"" Jan 29 16:27:22.521164 containerd[1516]: time="2025-01-29T16:27:22.521107937Z" level=info msg="TearDown network for sandbox \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\" successfully" Jan 29 16:27:22.521164 containerd[1516]: time="2025-01-29T16:27:22.521126933Z" level=info msg="StopPodSandbox for \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\" returns successfully" Jan 29 16:27:22.521591 containerd[1516]: time="2025-01-29T16:27:22.521540188Z" level=info msg="RemovePodSandbox for \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\"" Jan 29 16:27:22.521653 containerd[1516]: time="2025-01-29T16:27:22.521612277Z" level=info msg="Forcibly stopping sandbox \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\"" Jan 29 16:27:22.521767 containerd[1516]: time="2025-01-29T16:27:22.521718180Z" level=info msg="TearDown network for sandbox \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\" successfully" Jan 29 16:27:22.527612 containerd[1516]: time="2025-01-29T16:27:22.527356974Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.527612 containerd[1516]: time="2025-01-29T16:27:22.527471554Z" level=info msg="RemovePodSandbox \"84df84e90e61c2ca3f4a54d5368611c2b357920919f2a326648dce96c37936cf\" returns successfully" Jan 29 16:27:22.528180 containerd[1516]: time="2025-01-29T16:27:22.528146342Z" level=info msg="StopPodSandbox for \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\"" Jan 29 16:27:22.528330 containerd[1516]: time="2025-01-29T16:27:22.528251794Z" level=info msg="TearDown network for sandbox \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\" successfully" Jan 29 16:27:22.528330 containerd[1516]: time="2025-01-29T16:27:22.528261544Z" level=info msg="StopPodSandbox for \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\" returns successfully" Jan 29 16:27:22.529512 containerd[1516]: time="2025-01-29T16:27:22.528846950Z" level=info msg="RemovePodSandbox for \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\"" Jan 29 16:27:22.529512 containerd[1516]: time="2025-01-29T16:27:22.528892838Z" level=info msg="Forcibly stopping sandbox \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\"" Jan 29 16:27:22.529512 containerd[1516]: time="2025-01-29T16:27:22.529000755Z" level=info msg="TearDown network for sandbox \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\" successfully" Jan 29 16:27:22.534687 sshd[6001]: Connection closed by 10.0.0.1 port 59654 Jan 29 16:27:22.538830 containerd[1516]: time="2025-01-29T16:27:22.536614767Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:27:22.538830 containerd[1516]: time="2025-01-29T16:27:22.536709951Z" level=info msg="RemovePodSandbox \"93fa559139f77dedfa2f4fe398d29d2a5df2a094a45198ff37413a498d46e982\" returns successfully" Jan 29 16:27:22.538830 containerd[1516]: time="2025-01-29T16:27:22.537107394Z" level=info msg="StopPodSandbox for \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\"" Jan 29 16:27:22.538830 containerd[1516]: time="2025-01-29T16:27:22.537213920Z" level=info msg="TearDown network for sandbox \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\" successfully" Jan 29 16:27:22.538830 containerd[1516]: time="2025-01-29T16:27:22.537225421Z" level=info msg="StopPodSandbox for \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\" returns successfully" Jan 29 16:27:22.538830 containerd[1516]: time="2025-01-29T16:27:22.537480412Z" level=info msg="RemovePodSandbox for \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\"" Jan 29 16:27:22.538830 containerd[1516]: time="2025-01-29T16:27:22.537498356Z" level=info msg="Forcibly stopping sandbox \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\"" Jan 29 16:27:22.538830 containerd[1516]: time="2025-01-29T16:27:22.537575946Z" level=info msg="TearDown network for sandbox \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\" successfully" Jan 29 16:27:22.536720 sshd-session[5998]: pam_unix(sshd:session): session closed for user core Jan 29 16:27:22.540448 systemd[1]: sshd@20-10.0.0.146:22-10.0.0.1:59654.service: Deactivated successfully. Jan 29 16:27:22.542600 containerd[1516]: time="2025-01-29T16:27:22.542407828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.542600 containerd[1516]: time="2025-01-29T16:27:22.542472552Z" level=info msg="RemovePodSandbox \"90981491dded856ca93b54d342a8b0b2622f1fdab7a82bbed5d67f8c374e1924\" returns successfully" Jan 29 16:27:22.543048 containerd[1516]: time="2025-01-29T16:27:22.543027659Z" level=info msg="StopPodSandbox for \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\"" Jan 29 16:27:22.543196 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 16:27:22.543283 containerd[1516]: time="2025-01-29T16:27:22.543191895Z" level=info msg="TearDown network for sandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\" successfully" Jan 29 16:27:22.543283 containerd[1516]: time="2025-01-29T16:27:22.543207565Z" level=info msg="StopPodSandbox for \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\" returns successfully" Jan 29 16:27:22.543507 containerd[1516]: time="2025-01-29T16:27:22.543478537Z" level=info msg="RemovePodSandbox for \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\"" Jan 29 16:27:22.543507 containerd[1516]: time="2025-01-29T16:27:22.543499818Z" level=info msg="Forcibly stopping sandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\"" Jan 29 16:27:22.543692 containerd[1516]: time="2025-01-29T16:27:22.543571455Z" level=info msg="TearDown network for sandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\" successfully" Jan 29 16:27:22.546861 systemd-logind[1493]: Session 21 logged out. Waiting for processes to exit. Jan 29 16:27:22.548007 systemd-logind[1493]: Removed session 21. 
Jan 29 16:27:22.548408 containerd[1516]: time="2025-01-29T16:27:22.548378880Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.549007 containerd[1516]: time="2025-01-29T16:27:22.548433926Z" level=info msg="RemovePodSandbox \"189a71e7474e13236fb1cae0235dbd75242b9f77abf70670be1295bbdb20c2ff\" returns successfully" Jan 29 16:27:22.549540 containerd[1516]: time="2025-01-29T16:27:22.549497872Z" level=info msg="StopPodSandbox for \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\"" Jan 29 16:27:22.549851 containerd[1516]: time="2025-01-29T16:27:22.549628263Z" level=info msg="TearDown network for sandbox \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\" successfully" Jan 29 16:27:22.549851 containerd[1516]: time="2025-01-29T16:27:22.549646968Z" level=info msg="StopPodSandbox for \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\" returns successfully" Jan 29 16:27:22.550897 containerd[1516]: time="2025-01-29T16:27:22.550024675Z" level=info msg="RemovePodSandbox for \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\"" Jan 29 16:27:22.550897 containerd[1516]: time="2025-01-29T16:27:22.550049974Z" level=info msg="Forcibly stopping sandbox \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\"" Jan 29 16:27:22.550897 containerd[1516]: time="2025-01-29T16:27:22.550140377Z" level=info msg="TearDown network for sandbox \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\" successfully" Jan 29 16:27:22.554409 containerd[1516]: time="2025-01-29T16:27:22.554338982Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:27:22.554409 containerd[1516]: time="2025-01-29T16:27:22.554392795Z" level=info msg="RemovePodSandbox \"1b22f5dd18f91c55953b57aa48c03510c476a7e42c3a6bb22c77927de8d49391\" returns successfully" Jan 29 16:27:22.554763 containerd[1516]: time="2025-01-29T16:27:22.554740183Z" level=info msg="StopPodSandbox for \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\"" Jan 29 16:27:22.554949 containerd[1516]: time="2025-01-29T16:27:22.554924658Z" level=info msg="TearDown network for sandbox \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\" successfully" Jan 29 16:27:22.554949 containerd[1516]: time="2025-01-29T16:27:22.554940859Z" level=info msg="StopPodSandbox for \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\" returns successfully" Jan 29 16:27:22.555270 containerd[1516]: time="2025-01-29T16:27:22.555171733Z" level=info msg="RemovePodSandbox for \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\"" Jan 29 16:27:22.555270 containerd[1516]: time="2025-01-29T16:27:22.555192763Z" level=info msg="Forcibly stopping sandbox \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\"" Jan 29 16:27:22.555433 containerd[1516]: time="2025-01-29T16:27:22.555257278Z" level=info msg="TearDown network for sandbox \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\" successfully" Jan 29 16:27:22.559220 containerd[1516]: time="2025-01-29T16:27:22.559188257Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.559386 containerd[1516]: time="2025-01-29T16:27:22.559350619Z" level=info msg="RemovePodSandbox \"254039a8ada84c2fffa200eabfb31fe8df1632e2d66f3e79ac4dbd9f5c8e77a3\" returns successfully" Jan 29 16:27:22.559739 containerd[1516]: time="2025-01-29T16:27:22.559718297Z" level=info msg="StopPodSandbox for \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\"" Jan 29 16:27:22.559853 containerd[1516]: time="2025-01-29T16:27:22.559830432Z" level=info msg="TearDown network for sandbox \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\" successfully" Jan 29 16:27:22.559853 containerd[1516]: time="2025-01-29T16:27:22.559850270Z" level=info msg="StopPodSandbox for \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\" returns successfully" Jan 29 16:27:22.560118 containerd[1516]: time="2025-01-29T16:27:22.560097015Z" level=info msg="RemovePodSandbox for \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\"" Jan 29 16:27:22.560165 containerd[1516]: time="2025-01-29T16:27:22.560118376Z" level=info msg="Forcibly stopping sandbox \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\"" Jan 29 16:27:22.560203 containerd[1516]: time="2025-01-29T16:27:22.560182529Z" level=info msg="TearDown network for sandbox \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\" successfully" Jan 29 16:27:22.564245 containerd[1516]: time="2025-01-29T16:27:22.564176881Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:27:22.564301 containerd[1516]: time="2025-01-29T16:27:22.564276992Z" level=info msg="RemovePodSandbox \"9a40fb37badbb086361474004990ff6e794bccf41cad3b456987fde14fc730fe\" returns successfully" Jan 29 16:27:22.564605 containerd[1516]: time="2025-01-29T16:27:22.564574245Z" level=info msg="StopPodSandbox for \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\"" Jan 29 16:27:22.564699 containerd[1516]: time="2025-01-29T16:27:22.564679497Z" level=info msg="TearDown network for sandbox \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\" successfully" Jan 29 16:27:22.564742 containerd[1516]: time="2025-01-29T16:27:22.564697791Z" level=info msg="StopPodSandbox for \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\" returns successfully" Jan 29 16:27:22.565038 containerd[1516]: time="2025-01-29T16:27:22.565005644Z" level=info msg="RemovePodSandbox for \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\"" Jan 29 16:27:22.565038 containerd[1516]: time="2025-01-29T16:27:22.565032816Z" level=info msg="Forcibly stopping sandbox \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\"" Jan 29 16:27:22.565256 containerd[1516]: time="2025-01-29T16:27:22.565114182Z" level=info msg="TearDown network for sandbox \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\" successfully" Jan 29 16:27:22.569852 containerd[1516]: time="2025-01-29T16:27:22.569819681Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.569936 containerd[1516]: time="2025-01-29T16:27:22.569871962Z" level=info msg="RemovePodSandbox \"b7404d590d67ff6662ab02e84662a3a29837fab5cfcecacb534cd1fdc27ef98b\" returns successfully" Jan 29 16:27:22.570277 containerd[1516]: time="2025-01-29T16:27:22.570241733Z" level=info msg="StopPodSandbox for \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\"" Jan 29 16:27:22.570419 containerd[1516]: time="2025-01-29T16:27:22.570398715Z" level=info msg="TearDown network for sandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\" successfully" Jan 29 16:27:22.570481 containerd[1516]: time="2025-01-29T16:27:22.570420858Z" level=info msg="StopPodSandbox for \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\" returns successfully" Jan 29 16:27:22.570819 containerd[1516]: time="2025-01-29T16:27:22.570785368Z" level=info msg="RemovePodSandbox for \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\"" Jan 29 16:27:22.570882 containerd[1516]: time="2025-01-29T16:27:22.570820596Z" level=info msg="Forcibly stopping sandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\"" Jan 29 16:27:22.570934 containerd[1516]: time="2025-01-29T16:27:22.570889909Z" level=info msg="TearDown network for sandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\" successfully" Jan 29 16:27:22.575453 containerd[1516]: time="2025-01-29T16:27:22.575419300Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:27:22.575549 containerd[1516]: time="2025-01-29T16:27:22.575477862Z" level=info msg="RemovePodSandbox \"dbc261688337646c2d9b228fd17924e7e326456aa198aae0f40c4afbc6cf32f8\" returns successfully" Jan 29 16:27:22.576064 containerd[1516]: time="2025-01-29T16:27:22.575886749Z" level=info msg="StopPodSandbox for \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\"" Jan 29 16:27:22.576064 containerd[1516]: time="2025-01-29T16:27:22.575996900Z" level=info msg="TearDown network for sandbox \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\" successfully" Jan 29 16:27:22.576064 containerd[1516]: time="2025-01-29T16:27:22.576007802Z" level=info msg="StopPodSandbox for \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\" returns successfully" Jan 29 16:27:22.576843 containerd[1516]: time="2025-01-29T16:27:22.576418882Z" level=info msg="RemovePodSandbox for \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\"" Jan 29 16:27:22.576843 containerd[1516]: time="2025-01-29T16:27:22.576449651Z" level=info msg="Forcibly stopping sandbox \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\"" Jan 29 16:27:22.576843 containerd[1516]: time="2025-01-29T16:27:22.576553421Z" level=info msg="TearDown network for sandbox \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\" successfully" Jan 29 16:27:22.582331 containerd[1516]: time="2025-01-29T16:27:22.582275675Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.582459 containerd[1516]: time="2025-01-29T16:27:22.582357933Z" level=info msg="RemovePodSandbox \"f75a11d2e95fbb3d4415444cc34b5b48ebe132074031e529aa2dcaef927855c4\" returns successfully" Jan 29 16:27:22.582925 containerd[1516]: time="2025-01-29T16:27:22.582894866Z" level=info msg="StopPodSandbox for \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\"" Jan 29 16:27:22.583067 containerd[1516]: time="2025-01-29T16:27:22.583049503Z" level=info msg="TearDown network for sandbox \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\" successfully" Jan 29 16:27:22.583099 containerd[1516]: time="2025-01-29T16:27:22.583069241Z" level=info msg="StopPodSandbox for \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\" returns successfully" Jan 29 16:27:22.583387 containerd[1516]: time="2025-01-29T16:27:22.583351224Z" level=info msg="RemovePodSandbox for \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\"" Jan 29 16:27:22.583387 containerd[1516]: time="2025-01-29T16:27:22.583378055Z" level=info msg="Forcibly stopping sandbox \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\"" Jan 29 16:27:22.583569 containerd[1516]: time="2025-01-29T16:27:22.583461135Z" level=info msg="TearDown network for sandbox \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\" successfully" Jan 29 16:27:22.587533 containerd[1516]: time="2025-01-29T16:27:22.587496365Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:27:22.587620 containerd[1516]: time="2025-01-29T16:27:22.587552342Z" level=info msg="RemovePodSandbox \"95c4d59715f9f29084f8d9c70ecbad3a4c35fbfdd2780c0358c297f7e165eab7\" returns successfully" Jan 29 16:27:22.587878 containerd[1516]: time="2025-01-29T16:27:22.587854744Z" level=info msg="StopPodSandbox for \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\"" Jan 29 16:27:22.587965 containerd[1516]: time="2025-01-29T16:27:22.587945107Z" level=info msg="TearDown network for sandbox \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\" successfully" Jan 29 16:27:22.588042 containerd[1516]: time="2025-01-29T16:27:22.588021685Z" level=info msg="StopPodSandbox for \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\" returns successfully" Jan 29 16:27:22.588290 containerd[1516]: time="2025-01-29T16:27:22.588252088Z" level=info msg="RemovePodSandbox for \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\"" Jan 29 16:27:22.588290 containerd[1516]: time="2025-01-29T16:27:22.588272888Z" level=info msg="Forcibly stopping sandbox \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\"" Jan 29 16:27:22.588375 containerd[1516]: time="2025-01-29T16:27:22.588335268Z" level=info msg="TearDown network for sandbox \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\" successfully" Jan 29 16:27:22.596819 containerd[1516]: time="2025-01-29T16:27:22.596540156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.597369 containerd[1516]: time="2025-01-29T16:27:22.597342699Z" level=info msg="RemovePodSandbox \"35b671eb5f5b34b033a6927f7fb925390b55bb9e71d81503d99b26fbd6556212\" returns successfully" Jan 29 16:27:22.598240 containerd[1516]: time="2025-01-29T16:27:22.598036884Z" level=info msg="StopPodSandbox for \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\"" Jan 29 16:27:22.598240 containerd[1516]: time="2025-01-29T16:27:22.598139111Z" level=info msg="TearDown network for sandbox \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\" successfully" Jan 29 16:27:22.598240 containerd[1516]: time="2025-01-29T16:27:22.598183185Z" level=info msg="StopPodSandbox for \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\" returns successfully" Jan 29 16:27:22.600809 containerd[1516]: time="2025-01-29T16:27:22.598511858Z" level=info msg="RemovePodSandbox for \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\"" Jan 29 16:27:22.600809 containerd[1516]: time="2025-01-29T16:27:22.598554941Z" level=info msg="Forcibly stopping sandbox \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\"" Jan 29 16:27:22.600809 containerd[1516]: time="2025-01-29T16:27:22.598638060Z" level=info msg="TearDown network for sandbox \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\" successfully" Jan 29 16:27:22.606435 containerd[1516]: time="2025-01-29T16:27:22.606383305Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:27:22.606667 containerd[1516]: time="2025-01-29T16:27:22.606648374Z" level=info msg="RemovePodSandbox \"c72e4f3af42b30b264fc90d37fd9d9a46f034b26d90040a20ad2bd4c6fd40c56\" returns successfully" Jan 29 16:27:22.607261 containerd[1516]: time="2025-01-29T16:27:22.607233870Z" level=info msg="StopPodSandbox for \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\"" Jan 29 16:27:22.607380 containerd[1516]: time="2025-01-29T16:27:22.607360624Z" level=info msg="TearDown network for sandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\" successfully" Jan 29 16:27:22.607380 containerd[1516]: time="2025-01-29T16:27:22.607375693Z" level=info msg="StopPodSandbox for \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\" returns successfully" Jan 29 16:27:22.607780 containerd[1516]: time="2025-01-29T16:27:22.607755954Z" level=info msg="RemovePodSandbox for \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\"" Jan 29 16:27:22.607837 containerd[1516]: time="2025-01-29T16:27:22.607779589Z" level=info msg="Forcibly stopping sandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\"" Jan 29 16:27:22.608106 containerd[1516]: time="2025-01-29T16:27:22.607870334Z" level=info msg="TearDown network for sandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\" successfully" Jan 29 16:27:22.611994 containerd[1516]: time="2025-01-29T16:27:22.611968025Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.612075 containerd[1516]: time="2025-01-29T16:27:22.612008041Z" level=info msg="RemovePodSandbox \"33d8e09ebf9a7c12a56a8f6832cffd2e1496ae9578439f5742168ead9c1af80e\" returns successfully" Jan 29 16:27:22.612357 containerd[1516]: time="2025-01-29T16:27:22.612307878Z" level=info msg="StopPodSandbox for \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\"" Jan 29 16:27:22.612469 containerd[1516]: time="2025-01-29T16:27:22.612449571Z" level=info msg="TearDown network for sandbox \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\" successfully" Jan 29 16:27:22.612469 containerd[1516]: time="2025-01-29T16:27:22.612465621Z" level=info msg="StopPodSandbox for \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\" returns successfully" Jan 29 16:27:22.612767 containerd[1516]: time="2025-01-29T16:27:22.612729088Z" level=info msg="RemovePodSandbox for \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\"" Jan 29 16:27:22.612767 containerd[1516]: time="2025-01-29T16:27:22.612758935Z" level=info msg="Forcibly stopping sandbox \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\"" Jan 29 16:27:22.612910 containerd[1516]: time="2025-01-29T16:27:22.612866301Z" level=info msg="TearDown network for sandbox \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\" successfully" Jan 29 16:27:22.616613 containerd[1516]: time="2025-01-29T16:27:22.616578120Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:27:22.616668 containerd[1516]: time="2025-01-29T16:27:22.616616614Z" level=info msg="RemovePodSandbox \"fc52e52a7b6a40e56711a3a32cbe6935ede10fe13f07b1b4cda15a855948903f\" returns successfully" Jan 29 16:27:22.616978 containerd[1516]: time="2025-01-29T16:27:22.616949755Z" level=info msg="StopPodSandbox for \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\"" Jan 29 16:27:22.617045 containerd[1516]: time="2025-01-29T16:27:22.617028616Z" level=info msg="TearDown network for sandbox \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\" successfully" Jan 29 16:27:22.617045 containerd[1516]: time="2025-01-29T16:27:22.617040168Z" level=info msg="StopPodSandbox for \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\" returns successfully" Jan 29 16:27:22.617315 containerd[1516]: time="2025-01-29T16:27:22.617261604Z" level=info msg="RemovePodSandbox for \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\"" Jan 29 16:27:22.617315 containerd[1516]: time="2025-01-29T16:27:22.617291882Z" level=info msg="Forcibly stopping sandbox \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\"" Jan 29 16:27:22.617439 containerd[1516]: time="2025-01-29T16:27:22.617401543Z" level=info msg="TearDown network for sandbox \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\" successfully" Jan 29 16:27:22.621308 containerd[1516]: time="2025-01-29T16:27:22.621275883Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.621517 containerd[1516]: time="2025-01-29T16:27:22.621490656Z" level=info msg="RemovePodSandbox \"b45519f8fd109c65bc61858960e9fc9365ff45a2587bf6f9e8891070db186aa3\" returns successfully" Jan 29 16:27:22.621784 containerd[1516]: time="2025-01-29T16:27:22.621757610Z" level=info msg="StopPodSandbox for \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\"" Jan 29 16:27:22.621916 containerd[1516]: time="2025-01-29T16:27:22.621889894Z" level=info msg="TearDown network for sandbox \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\" successfully" Jan 29 16:27:22.621916 containerd[1516]: time="2025-01-29T16:27:22.621910814Z" level=info msg="StopPodSandbox for \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\" returns successfully" Jan 29 16:27:22.623139 containerd[1516]: time="2025-01-29T16:27:22.622176956Z" level=info msg="RemovePodSandbox for \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\"" Jan 29 16:27:22.623139 containerd[1516]: time="2025-01-29T16:27:22.622207485Z" level=info msg="Forcibly stopping sandbox \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\"" Jan 29 16:27:22.623139 containerd[1516]: time="2025-01-29T16:27:22.622303670Z" level=info msg="TearDown network for sandbox \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\" successfully" Jan 29 16:27:22.634810 containerd[1516]: time="2025-01-29T16:27:22.634722593Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
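The recurring warning "Failed to get podSandbox status ... Sending the event with nil podSandboxStatus" reflects a tolerant-lookup pattern: by the time the container event is generated, the sandbox record has already been removed, so the event is emitted with a nil status rather than being dropped. A generic Go sketch of that pattern, with illustrative names only (this is not containerd's actual code):

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// SandboxStatus and Event are illustrative stand-ins for the runtime's types.
type SandboxStatus struct{ State string }

type Event struct {
	SandboxID string
	Status    *SandboxStatus // nil when the sandbox can no longer be found
}

// lookup stands in for the metadata-store query that fails in the log above.
func lookup(store map[string]*SandboxStatus, id string) (*SandboxStatus, error) {
	if s, ok := store[id]; ok {
		return s, nil
	}
	return nil, errNotFound
}

func emitEvent(store map[string]*SandboxStatus, id string) Event {
	status, err := lookup(store, id)
	if errors.Is(err, errNotFound) {
		// Mirrors the warning above: keep the event, just without a status.
		fmt.Printf("warning: sandbox %q not found; sending event with nil status\n", id)
		return Event{SandboxID: id, Status: nil}
	}
	return Event{SandboxID: id, Status: status}
}

func main() {
	store := map[string]*SandboxStatus{} // the record has already been removed
	fmt.Printf("%+v\n", emitEvent(store, "<sandbox-id>"))
}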
Jan 29 16:27:22.634894 containerd[1516]: time="2025-01-29T16:27:22.634829829Z" level=info msg="RemovePodSandbox \"46a50ccf630234d347b321c4839689c2c97f7c1a2e378ac4ae2c9f0f14107ff5\" returns successfully" Jan 29 16:27:22.635884 containerd[1516]: time="2025-01-29T16:27:22.635852896Z" level=info msg="StopPodSandbox for \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\"" Jan 29 16:27:22.636012 containerd[1516]: time="2025-01-29T16:27:22.635943250Z" level=info msg="TearDown network for sandbox \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\" successfully" Jan 29 16:27:22.636012 containerd[1516]: time="2025-01-29T16:27:22.635961615Z" level=info msg="StopPodSandbox for \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\" returns successfully" Jan 29 16:27:22.636182 containerd[1516]: time="2025-01-29T16:27:22.636156890Z" level=info msg="RemovePodSandbox for \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\"" Jan 29 16:27:22.636222 containerd[1516]: time="2025-01-29T16:27:22.636181708Z" level=info msg="Forcibly stopping sandbox \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\"" Jan 29 16:27:22.636300 containerd[1516]: time="2025-01-29T16:27:22.636262053Z" level=info msg="TearDown network for sandbox \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\" successfully" Jan 29 16:27:22.665785 containerd[1516]: time="2025-01-29T16:27:22.665740140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.665917 containerd[1516]: time="2025-01-29T16:27:22.665827898Z" level=info msg="RemovePodSandbox \"5ddbd42dd7e317216ac82ac936e9f8e65d41d5555d6f722c1595840685710683\" returns successfully" Jan 29 16:27:22.666444 containerd[1516]: time="2025-01-29T16:27:22.666287212Z" level=info msg="StopPodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\"" Jan 29 16:27:22.666444 containerd[1516]: time="2025-01-29T16:27:22.666382886Z" level=info msg="TearDown network for sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" successfully" Jan 29 16:27:22.666444 containerd[1516]: time="2025-01-29T16:27:22.666392434Z" level=info msg="StopPodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" returns successfully" Jan 29 16:27:22.666680 containerd[1516]: time="2025-01-29T16:27:22.666642755Z" level=info msg="RemovePodSandbox for \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\"" Jan 29 16:27:22.666680 containerd[1516]: time="2025-01-29T16:27:22.666671181Z" level=info msg="Forcibly stopping sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\"" Jan 29 16:27:22.666828 containerd[1516]: time="2025-01-29T16:27:22.666763177Z" level=info msg="TearDown network for sandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" successfully" Jan 29 16:27:22.670539 containerd[1516]: time="2025-01-29T16:27:22.670503961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:27:22.670587 containerd[1516]: time="2025-01-29T16:27:22.670551573Z" level=info msg="RemovePodSandbox \"3125095a41fd5e085a5a5b745de46ad4ff5e74d72c95dcb92b0cc9328181a518\" returns successfully" Jan 29 16:27:22.670830 containerd[1516]: time="2025-01-29T16:27:22.670786123Z" level=info msg="StopPodSandbox for \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\"" Jan 29 16:27:22.670909 containerd[1516]: time="2025-01-29T16:27:22.670898690Z" level=info msg="TearDown network for sandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\" successfully" Jan 29 16:27:22.670957 containerd[1516]: time="2025-01-29T16:27:22.670911364Z" level=info msg="StopPodSandbox for \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\" returns successfully" Jan 29 16:27:22.671233 containerd[1516]: time="2025-01-29T16:27:22.671207744Z" level=info msg="RemovePodSandbox for \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\"" Jan 29 16:27:22.671290 containerd[1516]: time="2025-01-29T16:27:22.671238063Z" level=info msg="Forcibly stopping sandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\"" Jan 29 16:27:22.671358 containerd[1516]: time="2025-01-29T16:27:22.671321543Z" level=info msg="TearDown network for sandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\" successfully" Jan 29 16:27:22.675261 containerd[1516]: time="2025-01-29T16:27:22.675226853Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.675341 containerd[1516]: time="2025-01-29T16:27:22.675268794Z" level=info msg="RemovePodSandbox \"b4744f85f1028dad8b6526ae247e51ef55f591c3f3b40043c1993df954646c50\" returns successfully" Jan 29 16:27:22.675571 containerd[1516]: time="2025-01-29T16:27:22.675546108Z" level=info msg="StopPodSandbox for \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\"" Jan 29 16:27:22.675666 containerd[1516]: time="2025-01-29T16:27:22.675646109Z" level=info msg="TearDown network for sandbox \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\" successfully" Jan 29 16:27:22.675666 containerd[1516]: time="2025-01-29T16:27:22.675663483Z" level=info msg="StopPodSandbox for \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\" returns successfully" Jan 29 16:27:22.676824 containerd[1516]: time="2025-01-29T16:27:22.675896501Z" level=info msg="RemovePodSandbox for \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\"" Jan 29 16:27:22.676824 containerd[1516]: time="2025-01-29T16:27:22.675925237Z" level=info msg="Forcibly stopping sandbox \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\"" Jan 29 16:27:22.676824 containerd[1516]: time="2025-01-29T16:27:22.676005601Z" level=info msg="TearDown network for sandbox \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\" successfully" Jan 29 16:27:22.679810 containerd[1516]: time="2025-01-29T16:27:22.679764028Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:27:22.679863 containerd[1516]: time="2025-01-29T16:27:22.679821389Z" level=info msg="RemovePodSandbox \"ac8e1a109f32f296a624aa04671995a647310f65caf4cf232a77cf407fb21f38\" returns successfully" Jan 29 16:27:22.680089 containerd[1516]: time="2025-01-29T16:27:22.680052654Z" level=info msg="StopPodSandbox for \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\"" Jan 29 16:27:22.680171 containerd[1516]: time="2025-01-29T16:27:22.680153388Z" level=info msg="TearDown network for sandbox \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\" successfully" Jan 29 16:27:22.680171 containerd[1516]: time="2025-01-29T16:27:22.680167414Z" level=info msg="StopPodSandbox for \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\" returns successfully" Jan 29 16:27:22.680436 containerd[1516]: time="2025-01-29T16:27:22.680377287Z" level=info msg="RemovePodSandbox for \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\"" Jan 29 16:27:22.680436 containerd[1516]: time="2025-01-29T16:27:22.680401104Z" level=info msg="Forcibly stopping sandbox \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\"" Jan 29 16:27:22.680552 containerd[1516]: time="2025-01-29T16:27:22.680465678Z" level=info msg="TearDown network for sandbox \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\" successfully" Jan 29 16:27:22.684183 containerd[1516]: time="2025-01-29T16:27:22.684162196Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.684249 containerd[1516]: time="2025-01-29T16:27:22.684198787Z" level=info msg="RemovePodSandbox \"8b114e606e5f5d658bfd47917031981d757ceacb42e7ef49e45a2e2ce4055ea5\" returns successfully" Jan 29 16:27:22.684562 containerd[1516]: time="2025-01-29T16:27:22.684516628Z" level=info msg="StopPodSandbox for \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\"" Jan 29 16:27:22.684740 containerd[1516]: time="2025-01-29T16:27:22.684719528Z" level=info msg="TearDown network for sandbox \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\" successfully" Jan 29 16:27:22.684782 containerd[1516]: time="2025-01-29T16:27:22.684739216Z" level=info msg="StopPodSandbox for \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\" returns successfully" Jan 29 16:27:22.685133 containerd[1516]: time="2025-01-29T16:27:22.685015818Z" level=info msg="RemovePodSandbox for \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\"" Jan 29 16:27:22.685133 containerd[1516]: time="2025-01-29T16:27:22.685041908Z" level=info msg="Forcibly stopping sandbox \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\"" Jan 29 16:27:22.685133 containerd[1516]: time="2025-01-29T16:27:22.685116391Z" level=info msg="TearDown network for sandbox \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\" successfully" Jan 29 16:27:22.688882 containerd[1516]: time="2025-01-29T16:27:22.688848769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:27:22.688933 containerd[1516]: time="2025-01-29T16:27:22.688888064Z" level=info msg="RemovePodSandbox \"1b15b5f17cfd830b829aa4cde457400082d5ecea676be81a1a1ec48497f2ee23\" returns successfully" Jan 29 16:27:22.689332 containerd[1516]: time="2025-01-29T16:27:22.689149046Z" level=info msg="StopPodSandbox for \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\"" Jan 29 16:27:22.689332 containerd[1516]: time="2025-01-29T16:27:22.689262164Z" level=info msg="TearDown network for sandbox \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\" successfully" Jan 29 16:27:22.689332 containerd[1516]: time="2025-01-29T16:27:22.689273365Z" level=info msg="StopPodSandbox for \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\" returns successfully" Jan 29 16:27:22.689490 containerd[1516]: time="2025-01-29T16:27:22.689469523Z" level=info msg="RemovePodSandbox for \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\"" Jan 29 16:27:22.689490 containerd[1516]: time="2025-01-29T16:27:22.689488690Z" level=info msg="Forcibly stopping sandbox \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\"" Jan 29 16:27:22.689606 containerd[1516]: time="2025-01-29T16:27:22.689570106Z" level=info msg="TearDown network for sandbox \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\" successfully" Jan 29 16:27:22.693253 containerd[1516]: time="2025-01-29T16:27:22.693228331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:22.693326 containerd[1516]: time="2025-01-29T16:27:22.693268778Z" level=info msg="RemovePodSandbox \"a84fc9c03a37425583f63931e6d55539ec8541c9d5bdf7abdc8be970ec06fa4b\" returns successfully" Jan 29 16:27:27.546879 systemd[1]: Started sshd@21-10.0.0.146:22-10.0.0.1:34946.service - OpenSSH per-connection server daemon (10.0.0.1:34946). Jan 29 16:27:27.589787 sshd[6018]: Accepted publickey for core from 10.0.0.1 port 34946 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:27:27.591475 sshd-session[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:27:27.595803 systemd-logind[1493]: New session 22 of user core. Jan 29 16:27:27.604939 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 16:27:27.718199 sshd[6020]: Connection closed by 10.0.0.1 port 34946 Jan 29 16:27:27.718582 sshd-session[6018]: pam_unix(sshd:session): session closed for user core Jan 29 16:27:27.722837 systemd[1]: sshd@21-10.0.0.146:22-10.0.0.1:34946.service: Deactivated successfully. Jan 29 16:27:27.724888 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 16:27:27.725930 systemd-logind[1493]: Session 22 logged out. Waiting for processes to exit. Jan 29 16:27:27.727010 systemd-logind[1493]: Removed session 22. Jan 29 16:27:32.736604 systemd[1]: Started sshd@22-10.0.0.146:22-10.0.0.1:34952.service - OpenSSH per-connection server daemon (10.0.0.1:34952). Jan 29 16:27:32.777857 sshd[6067]: Accepted publickey for core from 10.0.0.1 port 34952 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:27:32.779250 sshd-session[6067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:27:32.783245 systemd-logind[1493]: New session 23 of user core. 
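The interleaved sshd and systemd lines show the per-connection lifecycle for user core: publickey accepted, a pam_unix session opened, a session-N.scope started, then the connection closed and the scope and per-connection service deactivated. A minimal Go client that would produce an "Accepted publickey" entry like the ones above, assuming golang.org/x/crypto/ssh; the host, user, and key path are placeholders:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path; any private key authorized for the target user works.
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa")
	if err != nil {
		log.Fatalf("read key: %v", err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatalf("parse key: %v", err)
	}

	cfg := &ssh.ClientConfig{
		User: "core",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// For a real host, verify the host key instead of ignoring it.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}

	// Placeholder address mirroring the listener seen in the log.
	client, err := ssh.Dial("tcp", "10.0.0.146:22", cfg)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatalf("session: %v", err)
	}
	defer sess.Close()

	// Run a trivial command and disconnect, matching the short sessions above.
	if _, err := sess.CombinedOutput("true"); err != nil {
		log.Fatalf("run: %v", err)
	}
}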
Jan 29 16:27:32.792916 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 16:27:32.904900 sshd[6069]: Connection closed by 10.0.0.1 port 34952 Jan 29 16:27:32.905263 sshd-session[6067]: pam_unix(sshd:session): session closed for user core Jan 29 16:27:32.908863 systemd[1]: sshd@22-10.0.0.146:22-10.0.0.1:34952.service: Deactivated successfully. Jan 29 16:27:32.910756 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 16:27:32.911403 systemd-logind[1493]: Session 23 logged out. Waiting for processes to exit. Jan 29 16:27:32.912229 systemd-logind[1493]: Removed session 23. Jan 29 16:27:37.918061 systemd[1]: Started sshd@23-10.0.0.146:22-10.0.0.1:46678.service - OpenSSH per-connection server daemon (10.0.0.1:46678). Jan 29 16:27:37.954546 sshd[6102]: Accepted publickey for core from 10.0.0.1 port 46678 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:27:37.955894 sshd-session[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:27:37.959897 systemd-logind[1493]: New session 24 of user core. Jan 29 16:27:37.971939 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 16:27:38.083461 sshd[6104]: Connection closed by 10.0.0.1 port 46678 Jan 29 16:27:38.083939 sshd-session[6102]: pam_unix(sshd:session): session closed for user core Jan 29 16:27:38.088704 systemd[1]: sshd@23-10.0.0.146:22-10.0.0.1:46678.service: Deactivated successfully. Jan 29 16:27:38.090764 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 16:27:38.091502 systemd-logind[1493]: Session 24 logged out. Waiting for processes to exit. Jan 29 16:27:38.092358 systemd-logind[1493]: Removed session 24. Jan 29 16:27:43.107138 systemd[1]: Started sshd@24-10.0.0.146:22-10.0.0.1:46688.service - OpenSSH per-connection server daemon (10.0.0.1:46688). Jan 29 16:27:43.145963 sshd[6117]: Accepted publickey for core from 10.0.0.1 port 46688 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:27:43.147496 sshd-session[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:27:43.151497 systemd-logind[1493]: New session 25 of user core. Jan 29 16:27:43.158950 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 16:27:43.267537 sshd[6119]: Connection closed by 10.0.0.1 port 46688 Jan 29 16:27:43.267907 sshd-session[6117]: pam_unix(sshd:session): session closed for user core Jan 29 16:27:43.271310 systemd[1]: sshd@24-10.0.0.146:22-10.0.0.1:46688.service: Deactivated successfully. Jan 29 16:27:43.273207 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 16:27:43.273934 systemd-logind[1493]: Session 25 logged out. Waiting for processes to exit. Jan 29 16:27:43.274680 systemd-logind[1493]: Removed session 25. Jan 29 16:27:43.506767 kubelet[2658]: I0129 16:27:43.506735 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
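The final kubelet line comes from the readiness prober manager, which runs the readiness probes declared on container specs. A hedged Go sketch of such a probe using recent k8s.io/api types (the handler field name has changed across Kubernetes versions; values here are illustrative, not taken from this node):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Illustrative readiness probe; the prober manager in the log runs probes
	// like this one on their configured period and marks the container Ready
	// or NotReady based on the result.
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz",
				Port: intstr.FromInt(8080),
			},
		},
		InitialDelaySeconds: 5,
		PeriodSeconds:       10,
		FailureThreshold:    3,
	}
	fmt.Printf("%+v\n", probe)
}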