Jan 13 20:47:00.123818 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:47:00.123878 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:47:00.123903 kernel: BIOS-provided physical RAM map:
Jan 13 20:47:00.123923 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:47:00.123942 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:47:00.124009 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:47:00.124032 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 13 20:47:00.124053 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 13 20:47:00.124072 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:47:00.124092 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:47:00.124112 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 13 20:47:00.124132 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:47:00.124151 kernel: NX (Execute Disable) protection: active
Jan 13 20:47:00.124171 kernel: APIC: Static calls initialized
Jan 13 20:47:00.124201 kernel: SMBIOS 3.0.0 present.
Jan 13 20:47:00.124222 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 13 20:47:00.124242 kernel: Hypervisor detected: KVM
Jan 13 20:47:00.124263 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:47:00.124283 kernel: kvm-clock: using sched offset of 3905196624 cycles
Jan 13 20:47:00.124310 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:47:00.124331 kernel: tsc: Detected 1996.249 MHz processor
Jan 13 20:47:00.124353 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:47:00.124375 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:47:00.124396 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 13 20:47:00.124418 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:47:00.124439 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:47:00.124460 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 13 20:47:00.124481 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:47:00.124506 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 13 20:47:00.124527 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:47:00.124548 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:47:00.124569 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:47:00.124591 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 13 20:47:00.124611 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:47:00.124632 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:47:00.124653 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 13 20:47:00.124674 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 13 20:47:00.124700 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 13 20:47:00.124721 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 13 20:47:00.124742 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 13 20:47:00.124770 kernel: No NUMA configuration found
Jan 13 20:47:00.124792 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 13 20:47:00.124814 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff]
Jan 13 20:47:00.124840 kernel: Zone ranges:
Jan 13 20:47:00.124862 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:47:00.124884 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 20:47:00.124906 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 20:47:00.124928 kernel: Movable zone start for each node
Jan 13 20:47:00.124949 kernel: Early memory node ranges
Jan 13 20:47:00.125021 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:47:00.125044 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 13 20:47:00.125072 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 20:47:00.125094 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 13 20:47:00.125116 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:47:00.125137 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:47:00.125160 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 13 20:47:00.125182 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 20:47:00.125204 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:47:00.125226 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:47:00.125248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:47:00.125274 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:47:00.125297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:47:00.125318 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:47:00.125340 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:47:00.125362 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:47:00.125384 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 20:47:00.125406 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:47:00.125428 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 13 20:47:00.125450 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:47:00.125476 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:47:00.125499 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 20:47:00.125521 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 20:47:00.125542 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 20:47:00.125632 kernel: pcpu-alloc: [0] 0 1
Jan 13 20:47:00.125658 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 13 20:47:00.125684 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:47:00.125708 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:47:00.125736 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:47:00.125758 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:47:00.125780 kernel: Fallback order for Node 0: 0
Jan 13 20:47:00.125802 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 13 20:47:00.125824 kernel: Policy zone: Normal
Jan 13 20:47:00.125845 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:47:00.125867 kernel: software IO TLB: area num 2.
Jan 13 20:47:00.125890 kernel: Memory: 3964168K/4193772K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 229344K reserved, 0K cma-reserved)
Jan 13 20:47:00.125939 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:47:00.126092 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:47:00.126118 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:47:00.126140 kernel: Dynamic Preempt: voluntary
Jan 13 20:47:00.126162 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:47:00.126192 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:47:00.126216 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:47:00.126238 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:47:00.126261 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:47:00.126283 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:47:00.126304 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:47:00.126332 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:47:00.126354 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 20:47:00.126376 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:47:00.126398 kernel: Console: colour VGA+ 80x25
Jan 13 20:47:00.126419 kernel: printk: console [tty0] enabled
Jan 13 20:47:00.126441 kernel: printk: console [ttyS0] enabled
Jan 13 20:47:00.126463 kernel: ACPI: Core revision 20230628
Jan 13 20:47:00.126485 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:47:00.126507 kernel: x2apic enabled
Jan 13 20:47:00.126533 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:47:00.126555 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 20:47:00.126577 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 20:47:00.126599 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 13 20:47:00.126621 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 13 20:47:00.126643 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 13 20:47:00.126665 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:47:00.126687 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:47:00.126709 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:47:00.126735 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:47:00.126757 kernel: Speculative Store Bypass: Vulnerable
Jan 13 20:47:00.126779 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 13 20:47:00.126801 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:47:00.126836 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:47:00.126863 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:47:00.126886 kernel: landlock: Up and running.
Jan 13 20:47:00.126908 kernel: SELinux: Initializing.
Jan 13 20:47:00.126931 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:47:00.126954 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:47:00.127009 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 13 20:47:00.127039 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:47:00.127062 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:47:00.127086 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:47:00.127109 kernel: Performance Events: AMD PMU driver.
Jan 13 20:47:00.127132 kernel: ... version: 0
Jan 13 20:47:00.127159 kernel: ... bit width: 48
Jan 13 20:47:00.129038 kernel: ... generic registers: 4
Jan 13 20:47:00.129070 kernel: ... value mask: 0000ffffffffffff
Jan 13 20:47:00.129094 kernel: ... max period: 00007fffffffffff
Jan 13 20:47:00.129117 kernel: ... fixed-purpose events: 0
Jan 13 20:47:00.129140 kernel: ... event mask: 000000000000000f
Jan 13 20:47:00.129163 kernel: signal: max sigframe size: 1440
Jan 13 20:47:00.129186 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:47:00.129210 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:47:00.129241 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:47:00.129264 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:47:00.129287 kernel: .... node #0, CPUs: #1
Jan 13 20:47:00.129310 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:47:00.129333 kernel: smpboot: Max logical packages: 2
Jan 13 20:47:00.129356 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 13 20:47:00.129379 kernel: devtmpfs: initialized
Jan 13 20:47:00.129402 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:47:00.129426 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:47:00.129454 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:47:00.129477 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:47:00.129500 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:47:00.129523 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:47:00.129546 kernel: audit: type=2000 audit(1736801218.854:1): state=initialized audit_enabled=0 res=1
Jan 13 20:47:00.129569 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:47:00.129592 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:47:00.129615 kernel: cpuidle: using governor menu
Jan 13 20:47:00.129638 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:47:00.129666 kernel: dca service started, version 1.12.1
Jan 13 20:47:00.129690 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:47:00.129713 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:47:00.129736 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:47:00.129759 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:47:00.129782 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:47:00.129806 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:47:00.129828 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:47:00.129851 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:47:00.129874 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:47:00.129902 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:47:00.129950 kernel: ACPI: Interpreter enabled
Jan 13 20:47:00.130034 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 20:47:00.130058 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:47:00.130082 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:47:00.130105 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:47:00.130127 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 13 20:47:00.130150 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:47:00.130467 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:47:00.130724 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 20:47:00.130947 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 20:47:00.133060 kernel: acpiphp: Slot [3] registered
Jan 13 20:47:00.133088 kernel: acpiphp: Slot [4] registered
Jan 13 20:47:00.133111 kernel: acpiphp: Slot [5] registered
Jan 13 20:47:00.133135 kernel: acpiphp: Slot [6] registered
Jan 13 20:47:00.133158 kernel: acpiphp: Slot [7] registered
Jan 13 20:47:00.133188 kernel: acpiphp: Slot [8] registered
Jan 13 20:47:00.133211 kernel: acpiphp: Slot [9] registered
Jan 13 20:47:00.133234 kernel: acpiphp: Slot [10] registered
Jan 13 20:47:00.133257 kernel: acpiphp: Slot [11] registered
Jan 13 20:47:00.133279 kernel: acpiphp: Slot [12] registered
Jan 13 20:47:00.133302 kernel: acpiphp: Slot [13] registered
Jan 13 20:47:00.133325 kernel: acpiphp: Slot [14] registered
Jan 13 20:47:00.133347 kernel: acpiphp: Slot [15] registered
Jan 13 20:47:00.133370 kernel: acpiphp: Slot [16] registered
Jan 13 20:47:00.133398 kernel: acpiphp: Slot [17] registered
Jan 13 20:47:00.133420 kernel: acpiphp: Slot [18] registered
Jan 13 20:47:00.133443 kernel: acpiphp: Slot [19] registered
Jan 13 20:47:00.133465 kernel: acpiphp: Slot [20] registered
Jan 13 20:47:00.133488 kernel: acpiphp: Slot [21] registered
Jan 13 20:47:00.133510 kernel: acpiphp: Slot [22] registered
Jan 13 20:47:00.133533 kernel: acpiphp: Slot [23] registered
Jan 13 20:47:00.133556 kernel: acpiphp: Slot [24] registered
Jan 13 20:47:00.133578 kernel: acpiphp: Slot [25] registered
Jan 13 20:47:00.133601 kernel: acpiphp: Slot [26] registered
Jan 13 20:47:00.133627 kernel: acpiphp: Slot [27] registered
Jan 13 20:47:00.133650 kernel: acpiphp: Slot [28] registered
Jan 13 20:47:00.133673 kernel: acpiphp: Slot [29] registered
Jan 13 20:47:00.133696 kernel: acpiphp: Slot [30] registered
Jan 13 20:47:00.133718 kernel: acpiphp: Slot [31] registered
Jan 13 20:47:00.133741 kernel: PCI host bridge to bus 0000:00
Jan 13 20:47:00.134059 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:47:00.134275 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:47:00.134490 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:47:00.134691 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 20:47:00.134891 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 13 20:47:00.136228 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:47:00.136498 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 20:47:00.136744 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 20:47:00.137043 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 13 20:47:00.137281 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 13 20:47:00.137508 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 13 20:47:00.137734 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 13 20:47:00.141066 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 13 20:47:00.141323 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 13 20:47:00.141571 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 13 20:47:00.141813 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 13 20:47:00.142119 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 13 20:47:00.142365 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 13 20:47:00.142596 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 13 20:47:00.142824 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 13 20:47:00.143284 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 13 20:47:00.144184 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 13 20:47:00.144320 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:47:00.144429 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:47:00.144527 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 13 20:47:00.144622 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 13 20:47:00.144717 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 13 20:47:00.144809 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 13 20:47:00.144911 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 13 20:47:00.145054 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 20:47:00.145147 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 13 20:47:00.145237 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 13 20:47:00.145338 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 13 20:47:00.145432 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 13 20:47:00.145523 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 13 20:47:00.145622 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:47:00.145721 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 13 20:47:00.145812 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 13 20:47:00.145902 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 13 20:47:00.145930 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:47:00.145940 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:47:00.145950 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:47:00.153047 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:47:00.153063 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 20:47:00.153073 kernel: iommu: Default domain type: Translated
Jan 13 20:47:00.153082 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:47:00.153092 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:47:00.153101 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:47:00.153111 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:47:00.153120 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 13 20:47:00.153223 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 13 20:47:00.153316 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 13 20:47:00.153414 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:47:00.153428 kernel: vgaarb: loaded
Jan 13 20:47:00.153437 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:47:00.153447 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:47:00.153457 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:47:00.153466 kernel: pnp: PnP ACPI init
Jan 13 20:47:00.153562 kernel: pnp 00:03: [dma 2]
Jan 13 20:47:00.153577 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 20:47:00.153587 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:47:00.153600 kernel: NET: Registered PF_INET protocol family
Jan 13 20:47:00.153611 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:47:00.153626 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:47:00.153638 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:47:00.153651 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:47:00.153665 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:47:00.153681 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:47:00.153695 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:47:00.153711 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:47:00.153721 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:47:00.153730 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:47:00.153825 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:47:00.153921 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:47:00.154029 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:47:00.154110 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 13 20:47:00.154192 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 13 20:47:00.154286 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 13 20:47:00.154386 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 20:47:00.154401 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:47:00.154410 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 20:47:00.154420 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 13 20:47:00.154429 kernel: Initialise system trusted keyrings
Jan 13 20:47:00.154439 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:47:00.154448 kernel: Key type asymmetric registered
Jan 13 20:47:00.154457 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:47:00.154470 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:47:00.154480 kernel: io scheduler mq-deadline registered
Jan 13 20:47:00.154489 kernel: io scheduler kyber registered
Jan 13 20:47:00.154498 kernel: io scheduler bfq registered
Jan 13 20:47:00.154508 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:47:00.154518 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 13 20:47:00.154528 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 13 20:47:00.154537 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 20:47:00.154547 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 13 20:47:00.154558 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:47:00.154567 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:47:00.154577 kernel: random: crng init done
Jan 13 20:47:00.154586 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:47:00.154596 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:47:00.154605 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:47:00.154697 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 20:47:00.154712 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:47:00.154792 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 20:47:00.154879 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:46:59 UTC (1736801219)
Jan 13 20:47:00.154986 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 13 20:47:00.155001 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 20:47:00.155011 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:47:00.155020 kernel: Segment Routing with IPv6
Jan 13 20:47:00.155030 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:47:00.155039 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:47:00.155048 kernel: Key type dns_resolver registered
Jan 13 20:47:00.155061 kernel: IPI shorthand broadcast: enabled
Jan 13 20:47:00.155071 kernel: sched_clock: Marking stable (1013007295, 174721544)->(1225370141, -37641302)
Jan 13 20:47:00.155080 kernel: registered taskstats version 1
Jan 13 20:47:00.155089 kernel: Loading compiled-in X.509 certificates
Jan 13 20:47:00.155099 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 13 20:47:00.155108 kernel: Key type .fscrypt registered
Jan 13 20:47:00.155117 kernel: Key type fscrypt-provisioning registered
Jan 13 20:47:00.155127 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:47:00.155136 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:47:00.155147 kernel: ima: No architecture policies found
Jan 13 20:47:00.155157 kernel: clk: Disabling unused clocks
Jan 13 20:47:00.155166 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 13 20:47:00.155176 kernel: Write protecting the kernel read-only data: 38912k
Jan 13 20:47:00.155185 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 13 20:47:00.155194 kernel: Run /init as init process
Jan 13 20:47:00.155204 kernel: with arguments:
Jan 13 20:47:00.155213 kernel: /init
Jan 13 20:47:00.155222 kernel: with environment:
Jan 13 20:47:00.155233 kernel: HOME=/
Jan 13 20:47:00.155242 kernel: TERM=linux
Jan 13 20:47:00.155251 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:47:00.155263 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:47:00.155276 systemd[1]: Detected virtualization kvm.
Jan 13 20:47:00.155287 systemd[1]: Detected architecture x86-64.
Jan 13 20:47:00.155297 systemd[1]: Running in initrd.
Jan 13 20:47:00.155309 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:47:00.155320 systemd[1]: Hostname set to <localhost>.
Jan 13 20:47:00.155330 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:47:00.155340 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:47:00.155351 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:47:00.155361 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:47:00.155372 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:47:00.155391 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:47:00.155403 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:47:00.155414 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:47:00.155426 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:47:00.155436 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:47:00.155449 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:47:00.155459 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:47:00.155469 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:47:00.155480 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:47:00.155490 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:47:00.155500 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:47:00.155510 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:47:00.155521 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:47:00.155531 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:47:00.155543 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:47:00.155554 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:47:00.155564 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:47:00.155575 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:47:00.155585 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:47:00.155595 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:47:00.155606 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:47:00.155616 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:47:00.155626 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:47:00.155638 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:47:00.155649 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:47:00.155659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:47:00.155669 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:47:00.155697 systemd-journald[184]: Collecting audit messages is disabled.
Jan 13 20:47:00.155725 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:47:00.155736 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:47:00.155750 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:47:00.155761 systemd-journald[184]: Journal started
Jan 13 20:47:00.155784 systemd-journald[184]: Runtime Journal (/run/log/journal/9f37d6f126394403a92cde22c9b991bd) is 8.0M, max 78.3M, 70.3M free.
Jan 13 20:47:00.116750 systemd-modules-load[185]: Inserted module 'overlay'
Jan 13 20:47:00.204701 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:47:00.204726 kernel: Bridge firewalling registered
Jan 13 20:47:00.204739 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:47:00.165326 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 13 20:47:00.205352 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:47:00.206263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:47:00.207386 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:47:00.215183 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:47:00.217074 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:47:00.228091 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:47:00.230231 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:47:00.233005 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:47:00.237244 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:47:00.243043 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:47:00.245327 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:47:00.249307 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:47:00.256068 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:47:00.259894 dracut-cmdline[214]: dracut-dracut-053
Jan 13 20:47:00.263320 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:47:00.297710 systemd-resolved[224]: Positive Trust Anchors:
Jan 13 20:47:00.297723 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:47:00.297765 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:47:00.300511 systemd-resolved[224]: Defaulting to hostname 'linux'.
Jan 13 20:47:00.302515 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:47:00.303658 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:47:00.345990 kernel: SCSI subsystem initialized
Jan 13 20:47:00.356999 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:47:00.369981 kernel: iscsi: registered transport (tcp)
Jan 13 20:47:00.393993 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:47:00.394047 kernel: QLogic iSCSI HBA Driver
Jan 13 20:47:00.448947 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:47:00.457096 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:47:00.500714 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:47:00.500779 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:47:00.502965 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:47:00.548079 kernel: raid6: sse2x4 gen() 13044 MB/s
Jan 13 20:47:00.566056 kernel: raid6: sse2x2 gen() 15093 MB/s
Jan 13 20:47:00.584537 kernel: raid6: sse2x1 gen() 9842 MB/s
Jan 13 20:47:00.584667 kernel: raid6: using algorithm sse2x2 gen() 15093 MB/s
Jan 13 20:47:00.603770 kernel: raid6: .... xor() 8605 MB/s, rmw enabled
Jan 13 20:47:00.603878 kernel: raid6: using ssse3x2 recovery algorithm
Jan 13 20:47:00.627032 kernel: xor: measuring software checksum speed
Jan 13 20:47:00.627100 kernel: prefetch64-sse : 18486 MB/sec
Jan 13 20:47:00.628056 kernel: generic_sse : 15465 MB/sec
Jan 13 20:47:00.630579 kernel: xor: using function: prefetch64-sse (18486 MB/sec)
Jan 13 20:47:00.808021 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:47:00.824056 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:47:00.831111 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:47:00.844253 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Jan 13 20:47:00.848957 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:47:00.862267 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:47:00.891909 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
Jan 13 20:47:00.946256 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:47:00.954283 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:47:01.028612 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:47:01.036130 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:47:01.048043 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:47:01.049567 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:47:01.050792 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:47:01.051394 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:47:01.058108 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:47:01.099835 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:47:01.121150 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 13 20:47:01.163315 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 13 20:47:01.163446 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:47:01.163461 kernel: GPT:17805311 != 20971519
Jan 13 20:47:01.163474 kernel: libata version 3.00 loaded.
Jan 13 20:47:01.163487 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:47:01.163507 kernel: GPT:17805311 != 20971519
Jan 13 20:47:01.163519 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:47:01.163531 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:47:01.154824 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:47:01.154988 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:47:01.165166 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:47:01.166339 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:47:01.166395 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:47:01.166922 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:47:01.175530 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 13 20:47:01.203174 kernel: scsi host0: ata_piix
Jan 13 20:47:01.203309 kernel: scsi host1: ata_piix
Jan 13 20:47:01.203421 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 13 20:47:01.203436 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 13 20:47:01.203461 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (448)
Jan 13 20:47:01.175754 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:47:01.257993 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (452)
Jan 13 20:47:01.221770 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:47:01.258749 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:47:01.269586 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:47:01.275313 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:47:01.279932 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:47:01.280507 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:47:01.289091 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:47:01.293093 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:47:01.300853 disk-uuid[502]: Primary Header is updated.
Jan 13 20:47:01.300853 disk-uuid[502]: Secondary Entries is updated.
Jan 13 20:47:01.300853 disk-uuid[502]: Secondary Header is updated.
Jan 13 20:47:01.309065 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:47:01.322425 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:47:02.329098 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:47:02.330759 disk-uuid[503]: The operation has completed successfully.
Jan 13 20:47:02.403500 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:47:02.403756 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:47:02.439078 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:47:02.445746 sh[523]: Success
Jan 13 20:47:02.470052 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 13 20:47:02.553144 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:47:02.559151 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:47:02.565055 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:47:02.609875 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 13 20:47:02.610013 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:47:02.614709 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:47:02.619771 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:47:02.623362 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:47:02.644610 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:47:02.647173 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:47:02.652170 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:47:02.655292 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:47:02.687265 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:47:02.687347 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:47:02.687379 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:47:02.697017 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:47:02.716343 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:47:02.716071 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:47:02.731554 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:47:02.740320 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:47:02.749763 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:47:02.755722 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:47:02.795376 systemd-networkd[705]: lo: Link UP
Jan 13 20:47:02.795385 systemd-networkd[705]: lo: Gained carrier
Jan 13 20:47:02.797176 systemd-networkd[705]: Enumeration completed
Jan 13 20:47:02.798095 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:47:02.799421 systemd[1]: Reached target network.target - Network.
Jan 13 20:47:02.799519 systemd-networkd[705]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:47:02.799523 systemd-networkd[705]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:47:02.803694 systemd-networkd[705]: eth0: Link UP
Jan 13 20:47:02.803698 systemd-networkd[705]: eth0: Gained carrier
Jan 13 20:47:02.803706 systemd-networkd[705]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:47:02.813040 systemd-networkd[705]: eth0: DHCPv4 address 172.24.4.85/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 20:47:02.897787 ignition[694]: Ignition 2.20.0
Jan 13 20:47:02.897810 ignition[694]: Stage: fetch-offline
Jan 13 20:47:02.897866 ignition[694]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:47:02.897885 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:47:02.898117 ignition[694]: parsed url from cmdline: ""
Jan 13 20:47:02.898124 ignition[694]: no config URL provided
Jan 13 20:47:02.901351 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:47:02.898134 ignition[694]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:47:02.898149 ignition[694]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:47:02.898157 ignition[694]: failed to fetch config: resource requires networking
Jan 13 20:47:02.898766 ignition[694]: Ignition finished successfully
Jan 13 20:47:02.909323 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:47:02.929724 ignition[716]: Ignition 2.20.0
Jan 13 20:47:02.929746 ignition[716]: Stage: fetch
Jan 13 20:47:02.930109 ignition[716]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:47:02.930130 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:47:02.930279 ignition[716]: parsed url from cmdline: ""
Jan 13 20:47:02.930286 ignition[716]: no config URL provided
Jan 13 20:47:02.930296 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:47:02.930312 ignition[716]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:47:02.930435 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 13 20:47:02.930490 ignition[716]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 13 20:47:02.930516 ignition[716]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 13 20:47:03.064847 ignition[716]: GET result: OK
Jan 13 20:47:03.064937 ignition[716]: parsing config with SHA512: 50be7a619db805eeeec6b4c35b0e8c107a3771074fd662ebddcdb341aa6cc6355bd194c000aceff53e433553e1ac0f41b4e63fe6766a9a59eeb46bf9125a5c76
Jan 13 20:47:03.071535 unknown[716]: fetched base config from "system"
Jan 13 20:47:03.071558 unknown[716]: fetched base config from "system"
Jan 13 20:47:03.072069 ignition[716]: fetch: fetch complete
Jan 13 20:47:03.071573 unknown[716]: fetched user config from "openstack"
Jan 13 20:47:03.072080 ignition[716]: fetch: fetch passed
Jan 13 20:47:03.075469 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:47:03.072165 ignition[716]: Ignition finished successfully
Jan 13 20:47:03.085258 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:47:03.115798 ignition[724]: Ignition 2.20.0
Jan 13 20:47:03.115833 ignition[724]: Stage: kargs
Jan 13 20:47:03.116293 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:47:03.116318 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:47:03.120187 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:47:03.118062 ignition[724]: kargs: kargs passed
Jan 13 20:47:03.118178 ignition[724]: Ignition finished successfully
Jan 13 20:47:03.130278 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:47:03.152417 ignition[730]: Ignition 2.20.0
Jan 13 20:47:03.152431 ignition[730]: Stage: disks
Jan 13 20:47:03.156526 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:47:03.152626 ignition[730]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:47:03.152639 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:47:03.161176 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:47:03.153421 ignition[730]: disks: disks passed
Jan 13 20:47:03.162902 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:47:03.153469 ignition[730]: Ignition finished successfully
Jan 13 20:47:03.165367 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:47:03.167661 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:47:03.169325 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:47:03.178225 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:47:03.212932 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 20:47:03.230928 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:47:03.240360 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:47:03.423049 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 13 20:47:03.423031 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:47:03.423912 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:47:03.432027 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:47:03.439116 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:47:03.441487 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:47:03.443376 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 13 20:47:03.445812 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:47:03.445840 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:47:03.464732 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (747)
Jan 13 20:47:03.464786 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:47:03.464818 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:47:03.464848 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:47:03.455991 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:47:03.470170 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:47:03.482979 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:47:03.486302 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:47:03.593293 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:47:03.606268 initrd-setup-root[785]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:47:03.612994 initrd-setup-root[792]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:47:03.624054 initrd-setup-root[799]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:47:03.727814 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:47:03.736067 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:47:03.739075 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:47:03.748973 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:47:03.756016 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:47:03.787565 ignition[867]: INFO : Ignition 2.20.0
Jan 13 20:47:03.788350 ignition[867]: INFO : Stage: mount
Jan 13 20:47:03.788639 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:47:03.791775 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:47:03.793658 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:47:03.793658 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:47:03.793658 ignition[867]: INFO : mount: mount passed
Jan 13 20:47:03.793658 ignition[867]: INFO : Ignition finished successfully
Jan 13 20:47:04.049296 systemd-networkd[705]: eth0: Gained IPv6LL
Jan 13 20:47:10.702758 coreos-metadata[749]: Jan 13 20:47:10.702 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 20:47:10.743156 coreos-metadata[749]: Jan 13 20:47:10.743 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 20:47:10.761485 coreos-metadata[749]: Jan 13 20:47:10.761 INFO Fetch successful
Jan 13 20:47:10.763099 coreos-metadata[749]: Jan 13 20:47:10.761 INFO wrote hostname ci-4186-1-0-1-7e01718482.novalocal to /sysroot/etc/hostname
Jan 13 20:47:10.766508 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 13 20:47:10.766738 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 13 20:47:10.778280 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:47:10.806575 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:47:10.826091 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (884)
Jan 13 20:47:10.838720 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:47:10.838811 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:47:10.838842 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:47:10.850050 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:47:10.855751 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:47:10.897154 ignition[902]: INFO : Ignition 2.20.0
Jan 13 20:47:10.898920 ignition[902]: INFO : Stage: files
Jan 13 20:47:10.898920 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:47:10.898920 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:47:10.903644 ignition[902]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:47:10.903644 ignition[902]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:47:10.903644 ignition[902]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:47:10.909076 ignition[902]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:47:10.911368 ignition[902]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:47:10.911368 ignition[902]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:47:10.910198 unknown[902]: wrote ssh authorized keys file for user: core
Jan 13 20:47:10.916955 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:47:10.916955 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:47:10.916955 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:47:10.916955 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:47:10.916955 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:47:10.916955 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:47:10.916955 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:47:10.916955 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 13 20:47:11.343177 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 13 20:47:12.817502 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:47:12.821175 ignition[902]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:47:12.821175 ignition[902]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:47:12.821175 ignition[902]: INFO : files: files passed
Jan 13 20:47:12.821175 ignition[902]: INFO : Ignition finished successfully
Jan 13 20:47:12.820077 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:47:12.829392 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:47:12.835621 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:47:12.836629 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:47:12.836715 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:47:12.847973 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:47:12.847973 initrd-setup-root-after-ignition[930]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:47:12.850830 initrd-setup-root-after-ignition[934]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:47:12.851220 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:47:12.852604 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:47:12.858097 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:47:12.884549 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:47:12.884665 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:47:12.886125 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:47:12.887071 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:47:12.888207 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:47:12.889096 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:47:12.904800 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:47:12.913079 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:47:12.922275 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:47:12.922914 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:47:12.924181 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:47:12.925278 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:47:12.925386 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:47:12.926580 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:47:12.927322 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:47:12.928409 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:47:12.929421 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:47:12.930413 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:47:12.935436 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:47:12.936618 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:47:12.938440 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:47:12.940114 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:47:12.941909 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:47:12.943528 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:47:12.943633 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:47:12.945542 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:47:12.946520 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:47:12.947970 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:47:12.950273 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:47:12.951044 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:47:12.951151 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:47:12.953493 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:47:12.953634 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:47:12.954555 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:47:12.954665 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:47:12.964160 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:47:12.966101 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:47:12.966355 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:47:12.976332 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:47:12.978361 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:47:12.979795 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:47:12.985012 ignition[954]: INFO : Ignition 2.20.0
Jan 13 20:47:12.985012 ignition[954]: INFO : Stage: umount
Jan 13 20:47:12.985012 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:47:12.985012 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:47:12.985012 ignition[954]: INFO : umount: umount passed
Jan 13 20:47:12.985012 ignition[954]: INFO : Ignition finished successfully
Jan 13 20:47:12.982265 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:47:12.982475 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:47:12.990194 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:47:12.990835 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:47:12.992560 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:47:12.992735 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:47:12.996306 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:47:12.996348 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:47:12.997152 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:47:12.997192 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:47:12.997677 systemd[1]: Stopped target network.target - Network.
Jan 13 20:47:12.998133 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:47:12.998180 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:47:12.998730 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:47:13.000073 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:47:13.000797 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:47:13.001447 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:47:13.002610 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:47:13.004049 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:47:13.004090 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:47:13.004813 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:47:13.004847 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:47:13.005377 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:47:13.005422 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:47:13.008081 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:47:13.008122 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:47:13.008981 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:47:13.009521 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:47:13.010791 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:47:13.010877 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:47:13.015018 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:47:13.015034 systemd-networkd[705]: eth0: DHCPv6 lease lost
Jan 13 20:47:13.017637 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:47:13.018638 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:47:13.020730 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:47:13.020769 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:47:13.029044 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:47:13.032538 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:47:13.032588 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:47:13.033222 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:47:13.034623 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:47:13.034709 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:47:13.038023 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:47:13.038121 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:47:13.043253 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:47:13.043407 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:47:13.046481 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:47:13.046526 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:47:13.047726 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:47:13.047757 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:47:13.048736 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:47:13.048777 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:47:13.050575 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:47:13.050617 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:47:13.051812 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:47:13.051852 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:47:13.053040 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:47:13.053080 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:47:13.059296 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:47:13.059843 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:47:13.059890 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:47:13.060413 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:47:13.060452 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:47:13.064864 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:47:13.064905 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:47:13.066203 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:47:13.066242 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:47:13.067198 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:47:13.067236 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:47:13.068623 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:47:13.068719 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:47:13.069612 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:47:13.069681 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:47:13.071049 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:47:13.077094 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:47:13.193276 systemd[1]: Switching root.
Jan 13 20:47:13.249237 systemd-journald[184]: Journal stopped
Jan 13 20:47:15.209399 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:47:15.209465 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:47:15.209493 kernel: SELinux: policy capability open_perms=1
Jan 13 20:47:15.209508 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:47:15.209521 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:47:15.209532 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:47:15.209544 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:47:15.209555 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:47:15.209567 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:47:15.209583 systemd[1]: Successfully loaded SELinux policy in 74.824ms.
Jan 13 20:47:15.209606 kernel: audit: type=1403 audit(1736801234.188:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:47:15.209623 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.966ms.
Jan 13 20:47:15.209642 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:47:15.209655 systemd[1]: Detected virtualization kvm.
Jan 13 20:47:15.209668 systemd[1]: Detected architecture x86-64.
Jan 13 20:47:15.209681 systemd[1]: Detected first boot.
Jan 13 20:47:15.209693 systemd[1]: Hostname set to <ci-4186-1-0-1-7e01718482.novalocal>.
Jan 13 20:47:15.209706 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:47:15.209721 zram_generator::config[996]: No configuration found.
Jan 13 20:47:15.209735 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:47:15.209748 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:47:15.209760 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:47:15.209773 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:47:15.209788 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:47:15.209801 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:47:15.209815 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:47:15.209828 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:47:15.209841 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:47:15.209853 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:47:15.209883 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:47:15.209897 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:47:15.209909 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:47:15.209923 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:47:15.209937 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:47:15.211320 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:47:15.211384 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:47:15.211403 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:47:15.211419 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:47:15.211435 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:47:15.211450 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:47:15.211465 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:47:15.211487 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:47:15.211502 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:47:15.211516 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:47:15.211533 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:47:15.211546 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:47:15.211559 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:47:15.211572 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:47:15.211584 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:47:15.211604 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:47:15.211617 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:47:15.211652 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:47:15.211680 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:47:15.211693 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:47:15.211706 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:47:15.211719 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:47:15.211732 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:47:15.211745 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:47:15.211818 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:47:15.211835 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:47:15.211850 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:47:15.211864 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:47:15.211877 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:47:15.211891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:47:15.211905 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:47:15.211918 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:47:15.211935 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:47:15.211949 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:47:15.212004 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:47:15.212020 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:47:15.212034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:47:15.212049 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:47:15.212063 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:47:15.212077 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:47:15.212090 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:47:15.212107 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:47:15.212121 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:47:15.212134 kernel: loop: module loaded
Jan 13 20:47:15.212148 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:47:15.212162 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:47:15.212176 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:47:15.212190 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:47:15.212205 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:47:15.212219 systemd[1]: Stopped verity-setup.service.
Jan 13 20:47:15.212236 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:47:15.212249 kernel: ACPI: bus type drm_connector registered
Jan 13 20:47:15.212262 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:47:15.212276 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:47:15.212289 kernel: fuse: init (API version 7.39)
Jan 13 20:47:15.212302 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:47:15.212315 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:47:15.212332 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:47:15.212346 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:47:15.212389 systemd-journald[1092]: Collecting audit messages is disabled.
Jan 13 20:47:15.212420 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:47:15.212433 systemd-journald[1092]: Journal started
Jan 13 20:47:15.212463 systemd-journald[1092]: Runtime Journal (/run/log/journal/9f37d6f126394403a92cde22c9b991bd) is 8.0M, max 78.3M, 70.3M free.
Jan 13 20:47:14.818709 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:47:14.841196 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 20:47:15.214983 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:47:14.841542 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:47:15.216813 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:47:15.217573 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:47:15.217736 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:47:15.218550 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:47:15.218727 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:47:15.219494 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:47:15.219646 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:47:15.220401 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:47:15.220523 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:47:15.221369 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:47:15.221494 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:47:15.222361 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:47:15.222484 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:47:15.223379 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:47:15.224244 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:47:15.225048 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:47:15.234697 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:47:15.242046 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:47:15.250103 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:47:15.252054 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:47:15.252098 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:47:15.254543 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:47:15.259158 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:47:15.262123 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:47:15.263116 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:47:15.272110 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:47:15.275059 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:47:15.275726 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:47:15.278702 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:47:15.279300 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:47:15.290537 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:47:15.301190 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:47:15.307247 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:47:15.311739 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:47:15.314350 kernel: loop0: detected capacity change from 0 to 205544
Jan 13 20:47:15.313913 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:47:15.315620 systemd-journald[1092]: Time spent on flushing to /var/log/journal/9f37d6f126394403a92cde22c9b991bd is 41.936ms for 927 entries.
Jan 13 20:47:15.315620 systemd-journald[1092]: System Journal (/var/log/journal/9f37d6f126394403a92cde22c9b991bd) is 8.0M, max 584.8M, 576.8M free.
Jan 13 20:47:15.378939 systemd-journald[1092]: Received client request to flush runtime journal.
Jan 13 20:47:15.315285 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:47:15.323270 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:47:15.331174 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:47:15.332059 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:47:15.334655 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:47:15.351204 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:47:15.352129 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:47:15.380643 udevadm[1136]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 20:47:15.382090 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:47:15.442874 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:47:15.476390 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:47:15.479924 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:47:15.500004 kernel: loop1: detected capacity change from 0 to 8
Jan 13 20:47:15.504413 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:47:15.521674 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:47:15.526997 kernel: loop2: detected capacity change from 0 to 141000
Jan 13 20:47:15.546953 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Jan 13 20:47:15.547144 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Jan 13 20:47:15.555378 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:47:15.601031 kernel: loop3: detected capacity change from 0 to 138184
Jan 13 20:47:15.700149 kernel: loop4: detected capacity change from 0 to 205544
Jan 13 20:47:15.771015 kernel: loop5: detected capacity change from 0 to 8
Jan 13 20:47:15.775010 kernel: loop6: detected capacity change from 0 to 141000
Jan 13 20:47:15.825021 kernel: loop7: detected capacity change from 0 to 138184
Jan 13 20:47:15.912853 (sd-merge)[1155]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 13 20:47:15.913311 (sd-merge)[1155]: Merged extensions into '/usr'.
Jan 13 20:47:15.921908 systemd[1]: Reloading requested from client PID 1129 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:47:15.921935 systemd[1]: Reloading...
Jan 13 20:47:16.011001 zram_generator::config[1181]: No configuration found.
Jan 13 20:47:16.180864 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:47:16.243012 systemd[1]: Reloading finished in 320 ms.
Jan 13 20:47:16.266745 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:47:16.276235 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:47:16.277871 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:47:16.278766 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:47:16.282191 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:47:16.303398 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:47:16.303806 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:47:16.303826 systemd[1]: Reloading...
Jan 13 20:47:16.304161 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:47:16.305146 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:47:16.305747 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Jan 13 20:47:16.305897 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Jan 13 20:47:16.328906 systemd-udevd[1239]: Using default interface naming scheme 'v255'.
Jan 13 20:47:16.332750 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:47:16.332762 systemd-tmpfiles[1237]: Skipping /boot
Jan 13 20:47:16.345481 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:47:16.345493 systemd-tmpfiles[1237]: Skipping /boot
Jan 13 20:47:16.385023 zram_generator::config[1267]: No configuration found.
Jan 13 20:47:16.533730 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:47:16.593871 systemd[1]: Reloading finished in 288 ms.
Jan 13 20:47:16.617415 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:47:16.637136 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:47:16.648097 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:47:16.652219 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:47:16.662507 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:47:16.672142 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:47:16.672883 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:47:16.692827 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:47:16.693055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:47:16.702060 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:47:16.707810 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:47:16.711415 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:47:16.713087 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:47:16.717038 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:47:16.717606 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:47:16.718638 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:47:16.719306 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:47:16.733271 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:47:16.733514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:47:16.743047 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:47:16.752739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:47:16.760301 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:47:16.761779 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:47:16.764521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:47:16.764714 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:47:16.773039 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:47:16.777427 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 20:47:16.782465 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:47:16.782713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:47:16.788298 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:47:16.791809 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:47:16.792710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:47:16.792879 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:47:16.796212 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:47:16.800881 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:47:16.802793 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:47:16.814203 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1355)
Jan 13 20:47:16.817160 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:47:16.830129 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:47:16.830333 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:47:16.840430 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:47:16.841481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:47:16.843197 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:47:16.867764 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:47:16.867934 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:47:16.869267 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:47:16.877218 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:47:16.881216 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:47:16.888136 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:47:16.917279 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:47:16.931602 augenrules[1399]: No rules
Jan 13 20:47:16.933648 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:47:16.933844 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:47:16.938864 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:47:16.984018 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 13 20:47:17.002384 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 20:47:17.019988 kernel: ACPI: button: Power Button [PWRF]
Jan 13 20:47:17.056082 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:47:17.057496 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:47:17.079025 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 20:47:17.096637 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:47:17.109552 systemd-resolved[1336]: Positive Trust Anchors:
Jan 13 20:47:17.111218 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:47:17.109562 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:47:17.109608 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:47:17.121790 systemd-networkd[1353]: lo: Link UP
Jan 13 20:47:17.121797 systemd-networkd[1353]: lo: Gained carrier
Jan 13 20:47:17.125090 systemd-networkd[1353]: Enumeration completed
Jan 13 20:47:17.125202 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:47:17.125739 systemd-networkd[1353]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:47:17.125745 systemd-networkd[1353]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:47:17.126636 systemd-networkd[1353]: eth0: Link UP
Jan 13 20:47:17.126642 systemd-networkd[1353]: eth0: Gained carrier
Jan 13 20:47:17.126657 systemd-networkd[1353]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:47:17.134139 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:47:17.143024 systemd-networkd[1353]: eth0: DHCPv4 address 172.24.4.85/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 20:47:17.143842 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection.
Jan 13 20:47:17.144142 systemd-resolved[1336]: Using system hostname 'ci-4186-1-0-1-7e01718482.novalocal'.
Jan 13 20:47:17.145883 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:47:17.146575 systemd[1]: Reached target network.target - Network.
Jan 13 20:47:17.147082 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:47:17.152938 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 13 20:47:17.153003 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 13 20:47:17.158347 kernel: Console: switching to colour dummy device 80x25
Jan 13 20:47:17.158388 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 13 20:47:17.158406 kernel: [drm] features: -context_init
Jan 13 20:47:17.160192 kernel: [drm] number of scanouts: 1
Jan 13 20:47:17.160229 kernel: [drm] number of cap sets: 0
Jan 13 20:47:17.162973 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 13 20:47:17.164308 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:47:17.164513 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:47:17.167022 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 13 20:47:17.175021 kernel: Console: switching to colour frame buffer device 160x50
Jan 13 20:47:17.178080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:47:17.182582 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 13 20:47:17.188138 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:47:17.188525 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:47:17.196117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:47:17.196564 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:47:17.200876 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:47:17.235599 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:47:17.290506 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:47:17.291899 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:47:17.299197 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:47:17.321595 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:47:17.378457 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:47:17.476177 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:47:17.478077 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:47:17.480379 ldconfig[1124]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:47:17.480473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:47:17.491098 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:47:17.499273 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:47:17.510370 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:47:17.511701 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:47:17.511889 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:47:17.512005 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:47:17.512261 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:47:17.512433 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:47:17.512509 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:47:17.512570 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:47:17.512599 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:47:17.512659 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:47:17.516042 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:47:17.520386 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:47:17.527591 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:47:17.529710 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:47:17.533909 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:47:17.536338 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:47:17.538682 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:47:17.539006 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:47:17.546407 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:47:17.550128 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 20:47:17.556404 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:47:17.569806 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:47:17.577600 jq[1441]: false
Jan 13 20:47:17.577901 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:47:17.580553 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:47:17.588169 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:47:17.596491 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:47:17.606183 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:47:17.620204 extend-filesystems[1444]: Found loop4
Jan 13 20:47:17.626332 extend-filesystems[1444]: Found loop5
Jan 13 20:47:17.626332 extend-filesystems[1444]: Found loop6
Jan 13 20:47:17.626332 extend-filesystems[1444]: Found loop7
Jan 13 20:47:17.626332 extend-filesystems[1444]: Found vda
Jan 13 20:47:17.626332 extend-filesystems[1444]: Found vda1
Jan 13 20:47:17.626332 extend-filesystems[1444]: Found vda2
Jan 13 20:47:17.626332 extend-filesystems[1444]: Found vda3
Jan 13 20:47:17.626332 extend-filesystems[1444]: Found usr
Jan 13 20:47:17.626332 extend-filesystems[1444]: Found vda4
Jan 13 20:47:17.626332 extend-filesystems[1444]: Found vda6
Jan 13 20:47:17.626332 extend-filesystems[1444]: Found vda7
Jan 13 20:47:17.626332 extend-filesystems[1444]: Found vda9
Jan 13 20:47:17.626332 extend-filesystems[1444]: Checking size of /dev/vda9
Jan 13 20:47:17.622722 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:47:17.622285 dbus-daemon[1440]: [system] SELinux support is enabled
Jan 13 20:47:17.635186 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:47:17.637190 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:47:17.641170 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:47:17.656198 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:47:17.659645 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:47:17.666482 jq[1455]: true
Jan 13 20:47:17.675336 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:47:17.675506 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:47:17.675771 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:47:17.675905 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:47:17.683540 update_engine[1452]: I20250113 20:47:17.683447 1452 main.cc:92] Flatcar Update Engine starting
Jan 13 20:47:17.686276 update_engine[1452]: I20250113 20:47:17.686244 1452 update_check_scheduler.cc:74] Next update check in 9m46s
Jan 13 20:47:17.694490 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:47:17.695491 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:47:17.695539 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:47:17.699458 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:47:17.706158 jq[1464]: true
Jan 13 20:47:17.699484 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:47:17.700066 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:47:17.708123 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:47:17.712584 extend-filesystems[1444]: Resized partition /dev/vda9
Jan 13 20:47:17.728268 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:47:17.749930 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jan 13 20:47:17.739747 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:47:17.740559 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:47:17.759295 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1327)
Jan 13 20:47:17.775593 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jan 13 20:47:17.787144 systemd-logind[1449]: New seat seat0.
Jan 13 20:47:17.826351 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 20:47:17.826371 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 20:47:17.826808 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:47:17.834685 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:47:17.834685 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:47:17.834685 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jan 13 20:47:17.839354 extend-filesystems[1444]: Resized filesystem in /dev/vda9
Jan 13 20:47:17.836369 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:47:17.836579 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:47:17.866696 bash[1491]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:47:17.868144 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:47:17.881064 systemd[1]: Starting sshkeys.service...
Jan 13 20:47:17.899400 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 20:47:17.912359 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 20:47:17.929986 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:47:18.007051 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:47:18.037119 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:47:18.053250 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:47:18.061828 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:47:18.062067 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:47:18.076246 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:47:18.087923 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:47:18.100418 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:47:18.107193 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 20:47:18.110504 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:47:18.120844 containerd[1467]: time="2025-01-13T20:47:18.120762756Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:47:18.149992 containerd[1467]: time="2025-01-13T20:47:18.149931141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:47:18.152455 containerd[1467]: time="2025-01-13T20:47:18.152423946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:47:18.152522 containerd[1467]: time="2025-01-13T20:47:18.152507313Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:47:18.152599 containerd[1467]: time="2025-01-13T20:47:18.152585119Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:47:18.152804 containerd[1467]: time="2025-01-13T20:47:18.152786056Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:47:18.152880 containerd[1467]: time="2025-01-13T20:47:18.152865815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:47:18.153021 containerd[1467]: time="2025-01-13T20:47:18.152999877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:47:18.153079 containerd[1467]: time="2025-01-13T20:47:18.153066742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:47:18.153296 containerd[1467]: time="2025-01-13T20:47:18.153275303Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:47:18.153356 containerd[1467]: time="2025-01-13T20:47:18.153343060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:47:18.153416 containerd[1467]: time="2025-01-13T20:47:18.153402041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:47:18.153480 containerd[1467]: time="2025-01-13T20:47:18.153466281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:47:18.153608 containerd[1467]: time="2025-01-13T20:47:18.153590985Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:47:18.153876 containerd[1467]: time="2025-01-13T20:47:18.153842066Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:47:18.154060 containerd[1467]: time="2025-01-13T20:47:18.154039406Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:47:18.154120 containerd[1467]: time="2025-01-13T20:47:18.154106853Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:47:18.154263 containerd[1467]: time="2025-01-13T20:47:18.154245212Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:47:18.154372 containerd[1467]: time="2025-01-13T20:47:18.154355820Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:47:18.167243 containerd[1467]: time="2025-01-13T20:47:18.167218854Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:47:18.167377 containerd[1467]: time="2025-01-13T20:47:18.167361742Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:47:18.167620 containerd[1467]: time="2025-01-13T20:47:18.167539385Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:47:18.167666 containerd[1467]: time="2025-01-13T20:47:18.167625988Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:47:18.167693 containerd[1467]: time="2025-01-13T20:47:18.167666845Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:47:18.168064 containerd[1467]: time="2025-01-13T20:47:18.168021179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:47:18.168600 containerd[1467]: time="2025-01-13T20:47:18.168561102Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:47:18.168853 containerd[1467]: time="2025-01-13T20:47:18.168812153Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:47:18.168953 containerd[1467]: time="2025-01-13T20:47:18.168868689Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:47:18.169044 containerd[1467]: time="2025-01-13T20:47:18.169008260Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:47:18.169080 containerd[1467]: time="2025-01-13T20:47:18.169059176Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:47:18.169117 containerd[1467]: time="2025-01-13T20:47:18.169093861Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:47:18.169144 containerd[1467]: time="2025-01-13T20:47:18.169125661Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:47:18.169200 containerd[1467]: time="2025-01-13T20:47:18.169169603Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:47:18.169246 containerd[1467]: time="2025-01-13T20:47:18.169217312Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:47:18.169283 containerd[1467]: time="2025-01-13T20:47:18.169258430Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:47:18.169323 containerd[1467]: time="2025-01-13T20:47:18.169290329Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:47:18.169351 containerd[1467]: time="2025-01-13T20:47:18.169331517Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:47:18.169411 containerd[1467]: time="2025-01-13T20:47:18.169379887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.169450 containerd[1467]: time="2025-01-13T20:47:18.169422036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.169487 containerd[1467]: time="2025-01-13T20:47:18.169454437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.169520 containerd[1467]: time="2025-01-13T20:47:18.169487148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.169546 containerd[1467]: time="2025-01-13T20:47:18.169517606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.169570 containerd[1467]: time="2025-01-13T20:47:18.169550848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.169613 containerd[1467]: time="2025-01-13T20:47:18.169582658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.169646 containerd[1467]: time="2025-01-13T20:47:18.169623835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.169701 containerd[1467]: time="2025-01-13T20:47:18.169656757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.169732 containerd[1467]: time="2025-01-13T20:47:18.169699887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.169755 containerd[1467]: time="2025-01-13T20:47:18.169731377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.169803 containerd[1467]: time="2025-01-13T20:47:18.169769949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.169831 containerd[1467]: time="2025-01-13T20:47:18.169802039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.169888 containerd[1467]: time="2025-01-13T20:47:18.169838227Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:47:18.169972 containerd[1467]: time="2025-01-13T20:47:18.169915342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.170069 containerd[1467]: time="2025-01-13T20:47:18.170030197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.170101 containerd[1467]: time="2025-01-13T20:47:18.170079550Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:47:18.170207 containerd[1467]: time="2025-01-13T20:47:18.170175920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:47:18.170261 containerd[1467]: time="2025-01-13T20:47:18.170227838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:47:18.170293 containerd[1467]: time="2025-01-13T20:47:18.170258776Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:47:18.170382 containerd[1467]: time="2025-01-13T20:47:18.170291938Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:47:18.170382 containerd[1467]: time="2025-01-13T20:47:18.170319259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.170382 containerd[1467]: time="2025-01-13T20:47:18.170351630Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:47:18.170483 containerd[1467]: time="2025-01-13T20:47:18.170377639Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:47:18.170483 containerd[1467]: time="2025-01-13T20:47:18.170403507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:47:18.171257 containerd[1467]: time="2025-01-13T20:47:18.171128357Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:47:18.171393 containerd[1467]: time="2025-01-13T20:47:18.171275663Z" level=info msg="Connect containerd service"
Jan 13 20:47:18.171393 containerd[1467]: time="2025-01-13T20:47:18.171345754Z" level=info msg="using legacy CRI server"
Jan 13 20:47:18.171393 containerd[1467]: time="2025-01-13T20:47:18.171363888Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:47:18.173011 containerd[1467]: time="2025-01-13T20:47:18.171744652Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:47:18.173729 containerd[1467]: time="2025-01-13T20:47:18.173609500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:47:18.174071 containerd[1467]: time="2025-01-13T20:47:18.174021262Z" level=info msg="Start subscribing containerd event"
Jan 13 20:47:18.174152 containerd[1467]: time="2025-01-13T20:47:18.174137821Z" level=info msg="Start recovering state"
Jan 13 20:47:18.174287 containerd[1467]: time="2025-01-13T20:47:18.174256604Z" level=info msg="Start event monitor"
Jan 13 20:47:18.174357 containerd[1467]: time="2025-01-13T20:47:18.174342805Z" level=info msg="Start snapshots syncer"
Jan 13 20:47:18.174416 containerd[1467]: time="2025-01-13T20:47:18.174403659Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:47:18.174514 containerd[1467]: time="2025-01-13T20:47:18.174499659Z" level=info msg="Start streaming server"
Jan 13 20:47:18.175161 containerd[1467]: time="2025-01-13T20:47:18.175140141Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:47:18.175331 containerd[1467]: time="2025-01-13T20:47:18.175302215Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:47:18.175674 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:47:18.175797 containerd[1467]: time="2025-01-13T20:47:18.175780422Z" level=info msg="containerd successfully booted in 0.055911s"
Jan 13 20:47:19.089346 systemd-networkd[1353]: eth0: Gained IPv6LL
Jan 13 20:47:19.090627 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection.
Jan 13 20:47:19.094714 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:47:19.098619 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:47:19.114633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:47:19.131236 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:47:19.194624 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:47:19.392795 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:47:19.407809 systemd[1]: Started sshd@0-172.24.4.85:22-172.24.4.1:55760.service - OpenSSH per-connection server daemon (172.24.4.1:55760).
Jan 13 20:47:20.424526 sshd[1543]: Accepted publickey for core from 172.24.4.1 port 55760 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:47:20.443726 sshd-session[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:20.477106 systemd-logind[1449]: New session 1 of user core. Jan 13 20:47:20.479597 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:47:20.495694 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:47:20.542187 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:47:20.559322 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:47:20.574304 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:47:20.693092 systemd[1547]: Queued start job for default target default.target. Jan 13 20:47:20.699865 systemd[1547]: Created slice app.slice - User Application Slice. Jan 13 20:47:20.699895 systemd[1547]: Reached target paths.target - Paths. Jan 13 20:47:20.699912 systemd[1547]: Reached target timers.target - Timers. Jan 13 20:47:20.703079 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:47:20.730259 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:47:20.730394 systemd[1547]: Reached target sockets.target - Sockets. Jan 13 20:47:20.730412 systemd[1547]: Reached target basic.target - Basic System. Jan 13 20:47:20.730545 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:47:20.732906 systemd[1547]: Reached target default.target - Main User Target. Jan 13 20:47:20.733001 systemd[1547]: Startup finished in 152ms. Jan 13 20:47:20.737240 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 13 20:47:21.050538 systemd[1]: Started sshd@1-172.24.4.85:22-172.24.4.1:55772.service - OpenSSH per-connection server daemon (172.24.4.1:55772). Jan 13 20:47:21.165420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:47:21.165713 (kubelet)[1566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:47:22.470144 sshd[1560]: Accepted publickey for core from 172.24.4.1 port 55772 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:47:22.474403 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:22.487171 systemd-logind[1449]: New session 2 of user core. Jan 13 20:47:22.494538 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:47:22.783141 kubelet[1566]: E0113 20:47:22.782924 1566 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:47:22.787193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:47:22.787552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:47:22.788171 systemd[1]: kubelet.service: Consumed 2.134s CPU time. Jan 13 20:47:23.135564 agetty[1523]: failed to open credentials directory Jan 13 20:47:23.135876 agetty[1525]: failed to open credentials directory Jan 13 20:47:23.170324 login[1523]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 20:47:23.171576 login[1525]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 20:47:23.181487 systemd-logind[1449]: New session 4 of user core. 
Jan 13 20:47:23.200534 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:47:23.208018 systemd-logind[1449]: New session 3 of user core. Jan 13 20:47:23.212231 sshd[1573]: Connection closed by 172.24.4.1 port 55772 Jan 13 20:47:23.212747 sshd-session[1560]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:23.215487 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:47:23.234818 systemd[1]: sshd@1-172.24.4.85:22-172.24.4.1:55772.service: Deactivated successfully. Jan 13 20:47:23.238333 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:47:23.240362 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:47:23.253343 systemd[1]: Started sshd@2-172.24.4.85:22-172.24.4.1:34492.service - OpenSSH per-connection server daemon (172.24.4.1:34492). Jan 13 20:47:23.260062 systemd-logind[1449]: Removed session 2. Jan 13 20:47:24.386868 sshd[1584]: Accepted publickey for core from 172.24.4.1 port 34492 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:47:24.389537 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:24.399647 systemd-logind[1449]: New session 5 of user core. Jan 13 20:47:24.413375 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 13 20:47:24.671469 coreos-metadata[1439]: Jan 13 20:47:24.671 WARN failed to locate config-drive, using the metadata service API instead Jan 13 20:47:24.717946 coreos-metadata[1439]: Jan 13 20:47:24.717 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 13 20:47:24.889818 coreos-metadata[1439]: Jan 13 20:47:24.889 INFO Fetch successful Jan 13 20:47:24.889818 coreos-metadata[1439]: Jan 13 20:47:24.889 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 20:47:24.903448 coreos-metadata[1439]: Jan 13 20:47:24.903 INFO Fetch successful Jan 13 20:47:24.903448 coreos-metadata[1439]: Jan 13 20:47:24.903 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 13 20:47:24.917368 coreos-metadata[1439]: Jan 13 20:47:24.917 INFO Fetch successful Jan 13 20:47:24.917368 coreos-metadata[1439]: Jan 13 20:47:24.917 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 13 20:47:24.930812 coreos-metadata[1439]: Jan 13 20:47:24.930 INFO Fetch successful Jan 13 20:47:24.930812 coreos-metadata[1439]: Jan 13 20:47:24.930 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 13 20:47:24.944921 coreos-metadata[1439]: Jan 13 20:47:24.944 INFO Fetch successful Jan 13 20:47:24.944921 coreos-metadata[1439]: Jan 13 20:47:24.944 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 13 20:47:24.958019 coreos-metadata[1439]: Jan 13 20:47:24.957 INFO Fetch successful Jan 13 20:47:24.989909 sshd[1606]: Connection closed by 172.24.4.1 port 34492 Jan 13 20:47:24.989679 sshd-session[1584]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:25.001578 systemd[1]: sshd@2-172.24.4.85:22-172.24.4.1:34492.service: Deactivated successfully. Jan 13 20:47:25.007522 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:47:25.010253 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. 
Jan 13 20:47:25.011499 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:47:25.013332 coreos-metadata[1500]: Jan 13 20:47:25.013 WARN failed to locate config-drive, using the metadata service API instead Jan 13 20:47:25.015692 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:47:25.019074 systemd-logind[1449]: Removed session 5. Jan 13 20:47:25.054831 coreos-metadata[1500]: Jan 13 20:47:25.054 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 13 20:47:25.070570 coreos-metadata[1500]: Jan 13 20:47:25.070 INFO Fetch successful Jan 13 20:47:25.070773 coreos-metadata[1500]: Jan 13 20:47:25.070 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:47:25.083475 coreos-metadata[1500]: Jan 13 20:47:25.083 INFO Fetch successful Jan 13 20:47:25.088584 unknown[1500]: wrote ssh authorized keys file for user: core Jan 13 20:47:25.122381 update-ssh-keys[1619]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:47:25.123296 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:47:25.126740 systemd[1]: Finished sshkeys.service. Jan 13 20:47:25.131585 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:47:25.131898 systemd[1]: Startup finished in 1.254s (kernel) + 14.347s (initrd) + 11.017s (userspace) = 26.619s. Jan 13 20:47:32.822267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:47:32.831465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:47:33.230247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:47:33.244514 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:47:33.317574 kubelet[1631]: E0113 20:47:33.317494 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:47:33.324847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:47:33.325219 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:47:35.011649 systemd[1]: Started sshd@3-172.24.4.85:22-172.24.4.1:44546.service - OpenSSH per-connection server daemon (172.24.4.1:44546). Jan 13 20:47:36.381316 sshd[1640]: Accepted publickey for core from 172.24.4.1 port 44546 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:47:36.384194 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:36.395560 systemd-logind[1449]: New session 6 of user core. Jan 13 20:47:36.406269 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:47:36.964080 sshd[1642]: Connection closed by 172.24.4.1 port 44546 Jan 13 20:47:36.964073 sshd-session[1640]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:36.975761 systemd[1]: sshd@3-172.24.4.85:22-172.24.4.1:44546.service: Deactivated successfully. Jan 13 20:47:36.979318 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:47:36.984416 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:47:36.991533 systemd[1]: Started sshd@4-172.24.4.85:22-172.24.4.1:44554.service - OpenSSH per-connection server daemon (172.24.4.1:44554). Jan 13 20:47:36.995322 systemd-logind[1449]: Removed session 6. 
Jan 13 20:47:38.229413 sshd[1647]: Accepted publickey for core from 172.24.4.1 port 44554 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:47:38.240630 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:38.249992 systemd-logind[1449]: New session 7 of user core. Jan 13 20:47:38.262325 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:47:38.945392 sshd[1649]: Connection closed by 172.24.4.1 port 44554 Jan 13 20:47:38.946394 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:38.957419 systemd[1]: sshd@4-172.24.4.85:22-172.24.4.1:44554.service: Deactivated successfully. Jan 13 20:47:38.960362 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:47:38.963424 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:47:38.970523 systemd[1]: Started sshd@5-172.24.4.85:22-172.24.4.1:44570.service - OpenSSH per-connection server daemon (172.24.4.1:44570). Jan 13 20:47:38.973355 systemd-logind[1449]: Removed session 7. Jan 13 20:47:40.380171 sshd[1654]: Accepted publickey for core from 172.24.4.1 port 44570 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:47:40.382774 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:40.393378 systemd-logind[1449]: New session 8 of user core. Jan 13 20:47:40.403262 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:47:41.070005 sshd[1656]: Connection closed by 172.24.4.1 port 44570 Jan 13 20:47:41.069132 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:41.079543 systemd[1]: sshd@5-172.24.4.85:22-172.24.4.1:44570.service: Deactivated successfully. Jan 13 20:47:41.082675 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:47:41.084553 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. 
Jan 13 20:47:41.097577 systemd[1]: Started sshd@6-172.24.4.85:22-172.24.4.1:44586.service - OpenSSH per-connection server daemon (172.24.4.1:44586). Jan 13 20:47:41.100443 systemd-logind[1449]: Removed session 8. Jan 13 20:47:42.723690 sshd[1661]: Accepted publickey for core from 172.24.4.1 port 44586 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:47:42.726401 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:42.737040 systemd-logind[1449]: New session 9 of user core. Jan 13 20:47:42.744265 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:47:43.212363 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:47:43.213707 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:47:43.236246 sudo[1664]: pam_unix(sudo:session): session closed for user root Jan 13 20:47:43.511115 sshd[1663]: Connection closed by 172.24.4.1 port 44586 Jan 13 20:47:43.513623 sshd-session[1661]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:43.527865 systemd[1]: sshd@6-172.24.4.85:22-172.24.4.1:44586.service: Deactivated successfully. Jan 13 20:47:43.532250 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:47:43.535475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:47:43.537707 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:47:43.546592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:47:43.557571 systemd[1]: Started sshd@7-172.24.4.85:22-172.24.4.1:37722.service - OpenSSH per-connection server daemon (172.24.4.1:37722). Jan 13 20:47:43.571953 systemd-logind[1449]: Removed session 9. Jan 13 20:47:43.873676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:47:43.890471 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:47:43.977455 kubelet[1679]: E0113 20:47:43.977340 1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:47:43.981146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:47:43.981426 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:47:44.950923 sshd[1670]: Accepted publickey for core from 172.24.4.1 port 37722 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:47:44.953718 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:44.964898 systemd-logind[1449]: New session 10 of user core. Jan 13 20:47:44.971261 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:47:45.376548 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:47:45.377271 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:47:45.384431 sudo[1688]: pam_unix(sudo:session): session closed for user root Jan 13 20:47:45.395652 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:47:45.396351 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:47:45.435112 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:47:45.489877 augenrules[1710]: No rules Jan 13 20:47:45.491033 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 13 20:47:45.491389 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:47:45.494187 sudo[1687]: pam_unix(sudo:session): session closed for user root Jan 13 20:47:45.683945 sshd[1686]: Connection closed by 172.24.4.1 port 37722 Jan 13 20:47:45.683699 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:45.693607 systemd[1]: sshd@7-172.24.4.85:22-172.24.4.1:37722.service: Deactivated successfully. Jan 13 20:47:45.696545 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:47:45.700354 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:47:45.705537 systemd[1]: Started sshd@8-172.24.4.85:22-172.24.4.1:37726.service - OpenSSH per-connection server daemon (172.24.4.1:37726). Jan 13 20:47:45.708751 systemd-logind[1449]: Removed session 10. Jan 13 20:47:47.250796 sshd[1718]: Accepted publickey for core from 172.24.4.1 port 37726 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:47:47.253527 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:47.264293 systemd-logind[1449]: New session 11 of user core. Jan 13 20:47:47.274286 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:47:47.676144 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:47:47.676791 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:47:48.995232 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:47:49.010585 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:47:49.061401 systemd[1]: Reloading requested from client PID 1753 ('systemctl') (unit session-11.scope)... Jan 13 20:47:49.061418 systemd[1]: Reloading... Jan 13 20:47:49.150637 zram_generator::config[1791]: No configuration found. 
Jan 13 20:47:49.345305 systemd-timesyncd[1373]: Contacted time server 82.67.126.242:123 (2.flatcar.pool.ntp.org). Jan 13 20:47:49.345399 systemd-timesyncd[1373]: Initial clock synchronization to Mon 2025-01-13 20:47:49.632423 UTC. Jan 13 20:47:49.465226 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:47:49.546215 systemd[1]: Reloading finished in 484 ms. Jan 13 20:47:49.601907 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:47:49.601993 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:47:49.602310 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:47:49.604344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:47:49.710758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:47:49.721385 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:47:49.960002 kubelet[1856]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:47:49.960002 kubelet[1856]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:47:49.960002 kubelet[1856]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:47:49.960659 kubelet[1856]: I0113 20:47:49.960033 1856 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:47:50.871409 kubelet[1856]: I0113 20:47:50.871321 1856 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:47:50.871409 kubelet[1856]: I0113 20:47:50.871402 1856 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:47:50.872207 kubelet[1856]: I0113 20:47:50.872162 1856 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:47:50.913742 kubelet[1856]: I0113 20:47:50.913711 1856 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:47:50.928042 kubelet[1856]: E0113 20:47:50.927870 1856 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:47:50.928042 kubelet[1856]: I0113 20:47:50.927929 1856 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:47:50.932881 kubelet[1856]: I0113 20:47:50.932863 1856 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:47:50.933850 kubelet[1856]: I0113 20:47:50.933029 1856 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:47:50.933850 kubelet[1856]: I0113 20:47:50.933212 1856 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:47:50.933850 kubelet[1856]: I0113 20:47:50.933244 1856 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.85","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolic
yOptions":null,"CgroupVersion":2} Jan 13 20:47:50.933850 kubelet[1856]: I0113 20:47:50.933487 1856 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:47:50.934072 kubelet[1856]: I0113 20:47:50.933500 1856 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:47:50.934072 kubelet[1856]: I0113 20:47:50.933612 1856 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:47:50.937236 kubelet[1856]: I0113 20:47:50.937202 1856 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:47:50.937316 kubelet[1856]: I0113 20:47:50.937305 1856 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:47:50.937405 kubelet[1856]: I0113 20:47:50.937395 1856 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:47:50.937474 kubelet[1856]: I0113 20:47:50.937464 1856 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:47:50.944563 kubelet[1856]: E0113 20:47:50.944542 1856 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:47:50.944705 kubelet[1856]: E0113 20:47:50.944685 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:47:50.946034 kubelet[1856]: I0113 20:47:50.946016 1856 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:47:50.948365 kubelet[1856]: I0113 20:47:50.948349 1856 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:47:50.949509 kubelet[1856]: W0113 20:47:50.949487 1856 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 20:47:50.950458 kubelet[1856]: I0113 20:47:50.950441 1856 server.go:1269] "Started kubelet" Jan 13 20:47:50.952453 kubelet[1856]: I0113 20:47:50.952301 1856 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:47:50.955135 kubelet[1856]: I0113 20:47:50.954532 1856 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:47:50.958034 kubelet[1856]: I0113 20:47:50.957967 1856 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:47:50.958361 kubelet[1856]: I0113 20:47:50.958346 1856 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:47:50.959543 kubelet[1856]: I0113 20:47:50.959517 1856 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:47:50.961699 kubelet[1856]: I0113 20:47:50.961500 1856 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:47:50.964850 kubelet[1856]: E0113 20:47:50.964732 1856 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:47:50.964850 kubelet[1856]: I0113 20:47:50.965109 1856 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:47:50.964850 kubelet[1856]: I0113 20:47:50.965232 1856 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:47:50.964850 kubelet[1856]: I0113 20:47:50.965278 1856 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:47:50.971367 kubelet[1856]: E0113 20:47:50.967020 1856 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.85.181a5b8dd0724697 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.85,UID:172.24.4.85,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.24.4.85,},FirstTimestamp:2025-01-13 20:47:50.950413975 +0000 UTC m=+1.225038289,LastTimestamp:2025-01-13 20:47:50.950413975 +0000 UTC m=+1.225038289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.85,}" Jan 13 20:47:50.971785 kubelet[1856]: W0113 20:47:50.971765 1856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.85" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 20:47:50.972105 kubelet[1856]: E0113 20:47:50.971919 1856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.24.4.85\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" 
logger="UnhandledError"
Jan 13 20:47:50.972105 kubelet[1856]: W0113 20:47:50.971971 1856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 20:47:50.972105 kubelet[1856]: E0113 20:47:50.971985 1856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 13 20:47:50.973543 kubelet[1856]: E0113 20:47:50.973526 1856 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.85\" not found"
Jan 13 20:47:50.974666 kubelet[1856]: E0113 20:47:50.974087 1856 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.85.181a5b8dd14c92c0 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.85,UID:172.24.4.85,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.24.4.85,},FirstTimestamp:2025-01-13 20:47:50.96472032 +0000 UTC m=+1.239344634,LastTimestamp:2025-01-13 20:47:50.96472032 +0000 UTC m=+1.239344634,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.85,}"
Jan 13 20:47:50.975539 kubelet[1856]: I0113 20:47:50.975494 1856 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:47:50.975539 kubelet[1856]: I0113 20:47:50.975513 1856 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:47:50.976078 kubelet[1856]: I0113 20:47:50.976038 1856 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:47:50.990935 kubelet[1856]: E0113 20:47:50.989416 1856 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.85\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 13 20:47:50.990935 kubelet[1856]: W0113 20:47:50.989662 1856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 13 20:47:50.990935 kubelet[1856]: E0113 20:47:50.989709 1856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Jan 13 20:47:51.002829 kubelet[1856]: E0113 20:47:51.002611 1856 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.85.181a5b8dd374f9a1 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.85,UID:172.24.4.85,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.24.4.85 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.24.4.85,},FirstTimestamp:2025-01-13 20:47:51.000922529 +0000 UTC m=+1.275546885,LastTimestamp:2025-01-13 20:47:51.000922529 +0000 UTC m=+1.275546885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.85,}"
Jan 13 20:47:51.003934 kubelet[1856]: I0113 20:47:51.003906 1856 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:47:51.004115 kubelet[1856]: I0113 20:47:51.004093 1856 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:47:51.004256 kubelet[1856]: I0113 20:47:51.004238 1856 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:47:51.010273 kubelet[1856]: I0113 20:47:51.010243 1856 policy_none.go:49] "None policy: Start"
Jan 13 20:47:51.011269 kubelet[1856]: I0113 20:47:51.011245 1856 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:47:51.011884 kubelet[1856]: I0113 20:47:51.011858 1856 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:47:51.028051 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 20:47:51.036940 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 20:47:51.044446 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 20:47:51.055738 kubelet[1856]: I0113 20:47:51.055273 1856 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:47:51.055738 kubelet[1856]: I0113 20:47:51.055468 1856 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 13 20:47:51.055738 kubelet[1856]: I0113 20:47:51.055489 1856 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 20:47:51.055738 kubelet[1856]: I0113 20:47:51.055672 1856 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:47:51.059273 kubelet[1856]: E0113 20:47:51.058295 1856 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.85\" not found"
Jan 13 20:47:51.073233 kubelet[1856]: I0113 20:47:51.073179 1856 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:47:51.075021 kubelet[1856]: I0113 20:47:51.074669 1856 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:47:51.075021 kubelet[1856]: I0113 20:47:51.074719 1856 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:47:51.075021 kubelet[1856]: I0113 20:47:51.074743 1856 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 13 20:47:51.075021 kubelet[1856]: E0113 20:47:51.074787 1856 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 13 20:47:51.156944 kubelet[1856]: I0113 20:47:51.156743 1856 kubelet_node_status.go:72] "Attempting to register node" node="172.24.4.85"
Jan 13 20:47:51.164265 kubelet[1856]: I0113 20:47:51.163940 1856 kubelet_node_status.go:75] "Successfully registered node" node="172.24.4.85"
Jan 13 20:47:51.164265 kubelet[1856]: E0113 20:47:51.164039 1856 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.24.4.85\": node \"172.24.4.85\" not found"
Jan 13 20:47:51.185197 kubelet[1856]: E0113 20:47:51.185044 1856 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.85\" not found"
Jan 13 20:47:51.265469 sudo[1721]: pam_unix(sudo:session): session closed for user root
Jan 13 20:47:51.285642 kubelet[1856]: E0113 20:47:51.285569 1856 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.85\" not found"
Jan 13 20:47:51.386932 kubelet[1856]: E0113 20:47:51.386822 1856 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.85\" not found"
Jan 13 20:47:51.473612 sshd[1720]: Connection closed by 172.24.4.1 port 37726
Jan 13 20:47:51.475164 sshd-session[1718]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:51.482130 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit.
Jan 13 20:47:51.482435 systemd[1]: sshd@8-172.24.4.85:22-172.24.4.1:37726.service: Deactivated successfully.
Jan 13 20:47:51.486103 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 20:47:51.486724 systemd[1]: session-11.scope: Consumed 1.057s CPU time, 75.1M memory peak, 0B memory swap peak.
Jan 13 20:47:51.487194 kubelet[1856]: E0113 20:47:51.487122 1856 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.85\" not found"
Jan 13 20:47:51.491657 systemd-logind[1449]: Removed session 11.
Jan 13 20:47:51.588053 kubelet[1856]: E0113 20:47:51.587882 1856 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.85\" not found"
Jan 13 20:47:51.688389 kubelet[1856]: E0113 20:47:51.688324 1856 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.85\" not found"
Jan 13 20:47:51.789138 kubelet[1856]: E0113 20:47:51.788804 1856 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.85\" not found"
Jan 13 20:47:51.876843 kubelet[1856]: I0113 20:47:51.876461 1856 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 13 20:47:51.876843 kubelet[1856]: W0113 20:47:51.876788 1856 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 20:47:51.890085 kubelet[1856]: E0113 20:47:51.890023 1856 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.85\" not found"
Jan 13 20:47:51.945707 kubelet[1856]: E0113 20:47:51.945637 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:51.990828 kubelet[1856]: E0113 20:47:51.990726 1856 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.85\" not found"
Jan 13 20:47:52.093162 kubelet[1856]: I0113 20:47:52.092946 1856 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 13 20:47:52.093953 containerd[1467]: time="2025-01-13T20:47:52.093571375Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 20:47:52.095293 kubelet[1856]: I0113 20:47:52.094026 1856 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 13 20:47:52.946493 kubelet[1856]: E0113 20:47:52.946399 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:52.947316 kubelet[1856]: I0113 20:47:52.946801 1856 apiserver.go:52] "Watching apiserver"
Jan 13 20:47:52.957051 kubelet[1856]: E0113 20:47:52.955566 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462"
Jan 13 20:47:52.966714 kubelet[1856]: I0113 20:47:52.966251 1856 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 13 20:47:52.978761 kubelet[1856]: I0113 20:47:52.978567 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc97343e-ca12-48a4-9070-f4af7dece8a8-tigera-ca-bundle\") pod \"calico-node-72h9d\" (UID: \"fc97343e-ca12-48a4-9070-f4af7dece8a8\") " pod="calico-system/calico-node-72h9d"
Jan 13 20:47:52.978761 kubelet[1856]: I0113 20:47:52.978651 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc97343e-ca12-48a4-9070-f4af7dece8a8-var-lib-calico\") pod \"calico-node-72h9d\" (UID: \"fc97343e-ca12-48a4-9070-f4af7dece8a8\") " pod="calico-system/calico-node-72h9d"
Jan 13 20:47:52.978761 kubelet[1856]: I0113 20:47:52.978707 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fc97343e-ca12-48a4-9070-f4af7dece8a8-cni-bin-dir\") pod \"calico-node-72h9d\" (UID: \"fc97343e-ca12-48a4-9070-f4af7dece8a8\") " pod="calico-system/calico-node-72h9d"
Jan 13 20:47:52.978761 kubelet[1856]: I0113 20:47:52.978750 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1d32a904-00d3-418b-b6e0-17302fc19462-registration-dir\") pod \"csi-node-driver-jl89x\" (UID: \"1d32a904-00d3-418b-b6e0-17302fc19462\") " pod="calico-system/csi-node-driver-jl89x"
Jan 13 20:47:52.979186 kubelet[1856]: I0113 20:47:52.978797 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fc97343e-ca12-48a4-9070-f4af7dece8a8-node-certs\") pod \"calico-node-72h9d\" (UID: \"fc97343e-ca12-48a4-9070-f4af7dece8a8\") " pod="calico-system/calico-node-72h9d"
Jan 13 20:47:52.979186 kubelet[1856]: I0113 20:47:52.978881 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fc97343e-ca12-48a4-9070-f4af7dece8a8-flexvol-driver-host\") pod \"calico-node-72h9d\" (UID: \"fc97343e-ca12-48a4-9070-f4af7dece8a8\") " pod="calico-system/calico-node-72h9d"
Jan 13 20:47:52.979186 kubelet[1856]: I0113 20:47:52.978923 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwl29\" (UniqueName: \"kubernetes.io/projected/fc97343e-ca12-48a4-9070-f4af7dece8a8-kube-api-access-nwl29\") pod \"calico-node-72h9d\" (UID: \"fc97343e-ca12-48a4-9070-f4af7dece8a8\") " pod="calico-system/calico-node-72h9d"
Jan 13 20:47:52.979186 kubelet[1856]: I0113 20:47:52.978963 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d32a904-00d3-418b-b6e0-17302fc19462-kubelet-dir\") pod \"csi-node-driver-jl89x\" (UID: \"1d32a904-00d3-418b-b6e0-17302fc19462\") " pod="calico-system/csi-node-driver-jl89x"
Jan 13 20:47:52.980612 systemd[1]: Created slice kubepods-besteffort-pod7051f801_5535_4b14_827b_7b79485c5696.slice - libcontainer container kubepods-besteffort-pod7051f801_5535_4b14_827b_7b79485c5696.slice.
Jan 13 20:47:52.982504 kubelet[1856]: I0113 20:47:52.981093 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1d32a904-00d3-418b-b6e0-17302fc19462-socket-dir\") pod \"csi-node-driver-jl89x\" (UID: \"1d32a904-00d3-418b-b6e0-17302fc19462\") " pod="calico-system/csi-node-driver-jl89x"
Jan 13 20:47:52.982504 kubelet[1856]: I0113 20:47:52.981208 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txcpk\" (UniqueName: \"kubernetes.io/projected/7051f801-5535-4b14-827b-7b79485c5696-kube-api-access-txcpk\") pod \"kube-proxy-kbwqw\" (UID: \"7051f801-5535-4b14-827b-7b79485c5696\") " pod="kube-system/kube-proxy-kbwqw"
Jan 13 20:47:52.982504 kubelet[1856]: I0113 20:47:52.981471 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc97343e-ca12-48a4-9070-f4af7dece8a8-xtables-lock\") pod \"calico-node-72h9d\" (UID: \"fc97343e-ca12-48a4-9070-f4af7dece8a8\") " pod="calico-system/calico-node-72h9d"
Jan 13 20:47:52.982504 kubelet[1856]: I0113 20:47:52.981539 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fc97343e-ca12-48a4-9070-f4af7dece8a8-policysync\") pod \"calico-node-72h9d\" (UID: \"fc97343e-ca12-48a4-9070-f4af7dece8a8\") " pod="calico-system/calico-node-72h9d"
Jan 13 20:47:52.982504 kubelet[1856]: I0113 20:47:52.981579 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fc97343e-ca12-48a4-9070-f4af7dece8a8-cni-log-dir\") pod \"calico-node-72h9d\" (UID: \"fc97343e-ca12-48a4-9070-f4af7dece8a8\") " pod="calico-system/calico-node-72h9d"
Jan 13 20:47:52.982824 kubelet[1856]: I0113 20:47:52.981621 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7051f801-5535-4b14-827b-7b79485c5696-xtables-lock\") pod \"kube-proxy-kbwqw\" (UID: \"7051f801-5535-4b14-827b-7b79485c5696\") " pod="kube-system/kube-proxy-kbwqw"
Jan 13 20:47:52.982824 kubelet[1856]: I0113 20:47:52.981669 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc97343e-ca12-48a4-9070-f4af7dece8a8-lib-modules\") pod \"calico-node-72h9d\" (UID: \"fc97343e-ca12-48a4-9070-f4af7dece8a8\") " pod="calico-system/calico-node-72h9d"
Jan 13 20:47:52.982824 kubelet[1856]: I0113 20:47:52.981755 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fc97343e-ca12-48a4-9070-f4af7dece8a8-var-run-calico\") pod \"calico-node-72h9d\" (UID: \"fc97343e-ca12-48a4-9070-f4af7dece8a8\") " pod="calico-system/calico-node-72h9d"
Jan 13 20:47:52.982824 kubelet[1856]: I0113 20:47:52.981798 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fc97343e-ca12-48a4-9070-f4af7dece8a8-cni-net-dir\") pod \"calico-node-72h9d\" (UID: \"fc97343e-ca12-48a4-9070-f4af7dece8a8\") " pod="calico-system/calico-node-72h9d"
Jan 13 20:47:52.982824 kubelet[1856]: I0113 20:47:52.981844 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1d32a904-00d3-418b-b6e0-17302fc19462-varrun\") pod \"csi-node-driver-jl89x\" (UID: \"1d32a904-00d3-418b-b6e0-17302fc19462\") " pod="calico-system/csi-node-driver-jl89x"
Jan 13 20:47:52.983173 kubelet[1856]: I0113 20:47:52.981888 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxbbm\" (UniqueName: \"kubernetes.io/projected/1d32a904-00d3-418b-b6e0-17302fc19462-kube-api-access-rxbbm\") pod \"csi-node-driver-jl89x\" (UID: \"1d32a904-00d3-418b-b6e0-17302fc19462\") " pod="calico-system/csi-node-driver-jl89x"
Jan 13 20:47:52.983173 kubelet[1856]: I0113 20:47:52.981928 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7051f801-5535-4b14-827b-7b79485c5696-kube-proxy\") pod \"kube-proxy-kbwqw\" (UID: \"7051f801-5535-4b14-827b-7b79485c5696\") " pod="kube-system/kube-proxy-kbwqw"
Jan 13 20:47:52.983173 kubelet[1856]: I0113 20:47:52.981968 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7051f801-5535-4b14-827b-7b79485c5696-lib-modules\") pod \"kube-proxy-kbwqw\" (UID: \"7051f801-5535-4b14-827b-7b79485c5696\") " pod="kube-system/kube-proxy-kbwqw"
Jan 13 20:47:53.004946 systemd[1]: Created slice kubepods-besteffort-podfc97343e_ca12_48a4_9070_f4af7dece8a8.slice - libcontainer container kubepods-besteffort-podfc97343e_ca12_48a4_9070_f4af7dece8a8.slice.
Jan 13 20:47:53.087398 kubelet[1856]: E0113 20:47:53.087284 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.088482 kubelet[1856]: W0113 20:47:53.087327 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.088837 kubelet[1856]: E0113 20:47:53.088120 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.092045 kubelet[1856]: E0113 20:47:53.090394 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.092045 kubelet[1856]: W0113 20:47:53.090432 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.094076 kubelet[1856]: E0113 20:47:53.092781 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.094076 kubelet[1856]: W0113 20:47:53.092852 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.094076 kubelet[1856]: E0113 20:47:53.093061 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.094076 kubelet[1856]: E0113 20:47:53.093118 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.096255 kubelet[1856]: E0113 20:47:53.094928 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.096255 kubelet[1856]: W0113 20:47:53.096066 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.096255 kubelet[1856]: E0113 20:47:53.096168 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.099013 kubelet[1856]: E0113 20:47:53.096873 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.099013 kubelet[1856]: W0113 20:47:53.096901 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.099013 kubelet[1856]: E0113 20:47:53.096957 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.099778 kubelet[1856]: E0113 20:47:53.099580 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.099778 kubelet[1856]: W0113 20:47:53.099607 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.099778 kubelet[1856]: E0113 20:47:53.099682 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.102368 kubelet[1856]: E0113 20:47:53.102152 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.102368 kubelet[1856]: W0113 20:47:53.102181 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.102368 kubelet[1856]: E0113 20:47:53.102247 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.103121 kubelet[1856]: E0113 20:47:53.102891 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.103121 kubelet[1856]: W0113 20:47:53.102955 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.103121 kubelet[1856]: E0113 20:47:53.103078 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.104360 kubelet[1856]: E0113 20:47:53.104180 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.104360 kubelet[1856]: W0113 20:47:53.104207 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.104360 kubelet[1856]: E0113 20:47:53.104309 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.105254 kubelet[1856]: E0113 20:47:53.105004 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.105254 kubelet[1856]: W0113 20:47:53.105031 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.105254 kubelet[1856]: E0113 20:47:53.105214 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.106065 kubelet[1856]: E0113 20:47:53.105768 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.106065 kubelet[1856]: W0113 20:47:53.105804 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.106468 kubelet[1856]: E0113 20:47:53.105884 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.106682 kubelet[1856]: E0113 20:47:53.106406 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.107069 kubelet[1856]: W0113 20:47:53.106816 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.107069 kubelet[1856]: E0113 20:47:53.106915 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.107556 kubelet[1856]: E0113 20:47:53.107422 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.107556 kubelet[1856]: W0113 20:47:53.107449 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.107556 kubelet[1856]: E0113 20:47:53.107531 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.109600 kubelet[1856]: E0113 20:47:53.109380 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.109600 kubelet[1856]: W0113 20:47:53.109406 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.110268 kubelet[1856]: E0113 20:47:53.110061 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.110268 kubelet[1856]: W0113 20:47:53.110090 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.110718 kubelet[1856]: E0113 20:47:53.110690 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.111179 kubelet[1856]: W0113 20:47:53.110848 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.111179 kubelet[1856]: E0113 20:47:53.110886 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.111179 kubelet[1856]: E0113 20:47:53.110934 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.111619 kubelet[1856]: E0113 20:47:53.111591 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.111809 kubelet[1856]: W0113 20:47:53.111748 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.112026 kubelet[1856]: E0113 20:47:53.111957 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.116751 kubelet[1856]: E0113 20:47:53.116275 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.116751 kubelet[1856]: W0113 20:47:53.116307 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.116751 kubelet[1856]: E0113 20:47:53.116334 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.116751 kubelet[1856]: E0113 20:47:53.116378 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.124897 kubelet[1856]: E0113 20:47:53.124835 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.125159 kubelet[1856]: W0113 20:47:53.125125 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.125410 kubelet[1856]: E0113 20:47:53.125345 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.139668 kubelet[1856]: E0113 20:47:53.139590 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.139668 kubelet[1856]: W0113 20:47:53.139638 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.139668 kubelet[1856]: E0113 20:47:53.139678 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.148555 kubelet[1856]: E0113 20:47:53.148464 1856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:47:53.148555 kubelet[1856]: W0113 20:47:53.148497 1856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:47:53.148555 kubelet[1856]: E0113 20:47:53.148516 1856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:47:53.297573 containerd[1467]: time="2025-01-13T20:47:53.297336154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbwqw,Uid:7051f801-5535-4b14-827b-7b79485c5696,Namespace:kube-system,Attempt:0,}"
Jan 13 20:47:53.312545 containerd[1467]: time="2025-01-13T20:47:53.312433267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-72h9d,Uid:fc97343e-ca12-48a4-9070-f4af7dece8a8,Namespace:calico-system,Attempt:0,}"
Jan 13 20:47:53.947726 kubelet[1856]: E0113 20:47:53.947626 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:54.034786 containerd[1467]: time="2025-01-13T20:47:54.034616654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:47:54.038473 containerd[1467]: time="2025-01-13T20:47:54.038385282Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 13 20:47:54.041687 containerd[1467]: time="2025-01-13T20:47:54.041613010Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:47:54.046044 containerd[1467]: time="2025-01-13T20:47:54.043699925Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:47:54.046333 containerd[1467]: time="2025-01-13T20:47:54.046272488Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:47:54.053077 containerd[1467]: time="2025-01-13T20:47:54.052963516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:47:54.058350 containerd[1467]: time="2025-01-13T20:47:54.058282323Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 745.647336ms"
Jan 13 20:47:54.063127 containerd[1467]: time="2025-01-13T20:47:54.063040650Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 765.494308ms"
Jan 13 20:47:54.100959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125959949.mount: Deactivated successfully.
Jan 13 20:47:54.256927 containerd[1467]: time="2025-01-13T20:47:54.256082054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:47:54.256927 containerd[1467]: time="2025-01-13T20:47:54.256234580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:47:54.256927 containerd[1467]: time="2025-01-13T20:47:54.256295898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:47:54.256927 containerd[1467]: time="2025-01-13T20:47:54.256483931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:47:54.267236 containerd[1467]: time="2025-01-13T20:47:54.263537145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:47:54.267236 containerd[1467]: time="2025-01-13T20:47:54.265466981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:47:54.267236 containerd[1467]: time="2025-01-13T20:47:54.265517680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:47:54.267236 containerd[1467]: time="2025-01-13T20:47:54.265666993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:47:54.391247 systemd[1]: Started cri-containerd-49e21ca10e36054d89b9ce59b1a52a443b580a981e4cf04c3187736c9a01e35a.scope - libcontainer container 49e21ca10e36054d89b9ce59b1a52a443b580a981e4cf04c3187736c9a01e35a. 
Jan 13 20:47:54.396154 systemd[1]: Started cri-containerd-93503e08190b292abcaf69dec9070e5a92d8874332ad45652d49c37ac2cb9418.scope - libcontainer container 93503e08190b292abcaf69dec9070e5a92d8874332ad45652d49c37ac2cb9418. Jan 13 20:47:54.434613 containerd[1467]: time="2025-01-13T20:47:54.434459601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-72h9d,Uid:fc97343e-ca12-48a4-9070-f4af7dece8a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"49e21ca10e36054d89b9ce59b1a52a443b580a981e4cf04c3187736c9a01e35a\"" Jan 13 20:47:54.437499 containerd[1467]: time="2025-01-13T20:47:54.437190194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 20:47:54.440338 containerd[1467]: time="2025-01-13T20:47:54.440313660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbwqw,Uid:7051f801-5535-4b14-827b-7b79485c5696,Namespace:kube-system,Attempt:0,} returns sandbox id \"93503e08190b292abcaf69dec9070e5a92d8874332ad45652d49c37ac2cb9418\"" Jan 13 20:47:54.948533 kubelet[1856]: E0113 20:47:54.948443 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:47:55.076791 kubelet[1856]: E0113 20:47:55.076656 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462" Jan 13 20:47:55.897290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3266748910.mount: Deactivated successfully. 
Jan 13 20:47:55.949393 kubelet[1856]: E0113 20:47:55.949317 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:47:56.070063 containerd[1467]: time="2025-01-13T20:47:56.070019716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:47:56.071537 containerd[1467]: time="2025-01-13T20:47:56.071386516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 13 20:47:56.073012 containerd[1467]: time="2025-01-13T20:47:56.072482194Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:47:56.075532 containerd[1467]: time="2025-01-13T20:47:56.075358512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:47:56.075994 containerd[1467]: time="2025-01-13T20:47:56.075951709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.638723911s" Jan 13 20:47:56.076045 containerd[1467]: time="2025-01-13T20:47:56.075995892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 20:47:56.078012 containerd[1467]: time="2025-01-13T20:47:56.077567093Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 20:47:56.079213 containerd[1467]: time="2025-01-13T20:47:56.079175818Z" level=info msg="CreateContainer within sandbox \"49e21ca10e36054d89b9ce59b1a52a443b580a981e4cf04c3187736c9a01e35a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:47:56.103500 containerd[1467]: time="2025-01-13T20:47:56.103456480Z" level=info msg="CreateContainer within sandbox \"49e21ca10e36054d89b9ce59b1a52a443b580a981e4cf04c3187736c9a01e35a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"86bcc11c332105c79a7b03759c9019d78942ca574cfe397672e8c41e6b43bfe7\"" Jan 13 20:47:56.104383 containerd[1467]: time="2025-01-13T20:47:56.104356461Z" level=info msg="StartContainer for \"86bcc11c332105c79a7b03759c9019d78942ca574cfe397672e8c41e6b43bfe7\"" Jan 13 20:47:56.140172 systemd[1]: Started cri-containerd-86bcc11c332105c79a7b03759c9019d78942ca574cfe397672e8c41e6b43bfe7.scope - libcontainer container 86bcc11c332105c79a7b03759c9019d78942ca574cfe397672e8c41e6b43bfe7. Jan 13 20:47:56.182576 containerd[1467]: time="2025-01-13T20:47:56.182399504Z" level=info msg="StartContainer for \"86bcc11c332105c79a7b03759c9019d78942ca574cfe397672e8c41e6b43bfe7\" returns successfully" Jan 13 20:47:56.201071 systemd[1]: cri-containerd-86bcc11c332105c79a7b03759c9019d78942ca574cfe397672e8c41e6b43bfe7.scope: Deactivated successfully. 
Jan 13 20:47:56.334739 containerd[1467]: time="2025-01-13T20:47:56.334606306Z" level=info msg="shim disconnected" id=86bcc11c332105c79a7b03759c9019d78942ca574cfe397672e8c41e6b43bfe7 namespace=k8s.io Jan 13 20:47:56.334739 containerd[1467]: time="2025-01-13T20:47:56.334705065Z" level=warning msg="cleaning up after shim disconnected" id=86bcc11c332105c79a7b03759c9019d78942ca574cfe397672e8c41e6b43bfe7 namespace=k8s.io Jan 13 20:47:56.334739 containerd[1467]: time="2025-01-13T20:47:56.334726576Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:47:56.841542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86bcc11c332105c79a7b03759c9019d78942ca574cfe397672e8c41e6b43bfe7-rootfs.mount: Deactivated successfully. Jan 13 20:47:56.949412 kubelet[1856]: E0113 20:47:56.949381 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:47:57.075949 kubelet[1856]: E0113 20:47:57.075633 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462" Jan 13 20:47:57.512857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1128354079.mount: Deactivated successfully. 
Jan 13 20:47:57.950050 kubelet[1856]: E0113 20:47:57.949916 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:47:58.332779 containerd[1467]: time="2025-01-13T20:47:58.332028296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:47:58.333305 containerd[1467]: time="2025-01-13T20:47:58.333284471Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230251" Jan 13 20:47:58.334910 containerd[1467]: time="2025-01-13T20:47:58.334889394Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:47:58.339181 containerd[1467]: time="2025-01-13T20:47:58.339118497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:47:58.340190 containerd[1467]: time="2025-01-13T20:47:58.340162319Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.262543338s" Jan 13 20:47:58.340353 containerd[1467]: time="2025-01-13T20:47:58.340281107Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Jan 13 20:47:58.342574 containerd[1467]: time="2025-01-13T20:47:58.342382728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:47:58.343193 containerd[1467]: 
time="2025-01-13T20:47:58.343171050Z" level=info msg="CreateContainer within sandbox \"93503e08190b292abcaf69dec9070e5a92d8874332ad45652d49c37ac2cb9418\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:47:58.375860 containerd[1467]: time="2025-01-13T20:47:58.375825404Z" level=info msg="CreateContainer within sandbox \"93503e08190b292abcaf69dec9070e5a92d8874332ad45652d49c37ac2cb9418\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fd1ceb50d52a1c25b15d63bc2fcfecff21965ad0c7db62a49b15c6ac849396d2\"" Jan 13 20:47:58.376868 containerd[1467]: time="2025-01-13T20:47:58.376807286Z" level=info msg="StartContainer for \"fd1ceb50d52a1c25b15d63bc2fcfecff21965ad0c7db62a49b15c6ac849396d2\"" Jan 13 20:47:58.429253 systemd[1]: Started cri-containerd-fd1ceb50d52a1c25b15d63bc2fcfecff21965ad0c7db62a49b15c6ac849396d2.scope - libcontainer container fd1ceb50d52a1c25b15d63bc2fcfecff21965ad0c7db62a49b15c6ac849396d2. Jan 13 20:47:58.464106 containerd[1467]: time="2025-01-13T20:47:58.463915302Z" level=info msg="StartContainer for \"fd1ceb50d52a1c25b15d63bc2fcfecff21965ad0c7db62a49b15c6ac849396d2\" returns successfully" Jan 13 20:47:58.950197 kubelet[1856]: E0113 20:47:58.950095 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:47:59.077489 kubelet[1856]: E0113 20:47:59.076681 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462" Jan 13 20:47:59.147478 kubelet[1856]: I0113 20:47:59.147230 1856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kbwqw" podStartSLOduration=4.24727167 podStartE2EDuration="8.147197479s" podCreationTimestamp="2025-01-13 
20:47:51 +0000 UTC" firstStartedPulling="2025-01-13 20:47:54.4416711 +0000 UTC m=+4.716295403" lastFinishedPulling="2025-01-13 20:47:58.34159691 +0000 UTC m=+8.616221212" observedRunningTime="2025-01-13 20:47:59.146032224 +0000 UTC m=+9.420656588" watchObservedRunningTime="2025-01-13 20:47:59.147197479 +0000 UTC m=+9.421821832" Jan 13 20:47:59.361817 systemd[1]: run-containerd-runc-k8s.io-fd1ceb50d52a1c25b15d63bc2fcfecff21965ad0c7db62a49b15c6ac849396d2-runc.MOgwhN.mount: Deactivated successfully. Jan 13 20:47:59.950889 kubelet[1856]: E0113 20:47:59.950749 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:00.951615 kubelet[1856]: E0113 20:48:00.951555 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:01.079542 kubelet[1856]: E0113 20:48:01.078554 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462" Jan 13 20:48:01.952467 kubelet[1856]: E0113 20:48:01.952387 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:02.513883 update_engine[1452]: I20250113 20:48:02.513756 1452 update_attempter.cc:509] Updating boot flags... 
Jan 13 20:48:02.566095 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2269) Jan 13 20:48:02.640000 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2273) Jan 13 20:48:02.714998 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2273) Jan 13 20:48:02.953527 kubelet[1856]: E0113 20:48:02.953494 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:03.076479 kubelet[1856]: E0113 20:48:03.075677 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462" Jan 13 20:48:03.954939 kubelet[1856]: E0113 20:48:03.954893 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:04.307198 containerd[1467]: time="2025-01-13T20:48:04.306300389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:04.308572 containerd[1467]: time="2025-01-13T20:48:04.308542715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 20:48:04.309793 containerd[1467]: time="2025-01-13T20:48:04.309770862Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:04.312378 containerd[1467]: time="2025-01-13T20:48:04.312358314Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:04.313911 containerd[1467]: time="2025-01-13T20:48:04.313888267Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.971473114s" Jan 13 20:48:04.314008 containerd[1467]: time="2025-01-13T20:48:04.313991801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 20:48:04.315941 containerd[1467]: time="2025-01-13T20:48:04.315919496Z" level=info msg="CreateContainer within sandbox \"49e21ca10e36054d89b9ce59b1a52a443b580a981e4cf04c3187736c9a01e35a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:48:04.346404 containerd[1467]: time="2025-01-13T20:48:04.346345921Z" level=info msg="CreateContainer within sandbox \"49e21ca10e36054d89b9ce59b1a52a443b580a981e4cf04c3187736c9a01e35a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8f56c5a71cbc12b06dbcd10254e8f3d8cd63844261e7bb6253abc203207dd2e9\"" Jan 13 20:48:04.347035 containerd[1467]: time="2025-01-13T20:48:04.346944828Z" level=info msg="StartContainer for \"8f56c5a71cbc12b06dbcd10254e8f3d8cd63844261e7bb6253abc203207dd2e9\"" Jan 13 20:48:04.386112 systemd[1]: Started cri-containerd-8f56c5a71cbc12b06dbcd10254e8f3d8cd63844261e7bb6253abc203207dd2e9.scope - libcontainer container 8f56c5a71cbc12b06dbcd10254e8f3d8cd63844261e7bb6253abc203207dd2e9. 
Jan 13 20:48:04.422994 containerd[1467]: time="2025-01-13T20:48:04.422560780Z" level=info msg="StartContainer for \"8f56c5a71cbc12b06dbcd10254e8f3d8cd63844261e7bb6253abc203207dd2e9\" returns successfully" Jan 13 20:48:04.955895 kubelet[1856]: E0113 20:48:04.955748 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:05.075900 kubelet[1856]: E0113 20:48:05.075817 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462" Jan 13 20:48:05.956691 kubelet[1856]: E0113 20:48:05.956206 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:05.979600 containerd[1467]: time="2025-01-13T20:48:05.979233654Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:48:05.983738 systemd[1]: cri-containerd-8f56c5a71cbc12b06dbcd10254e8f3d8cd63844261e7bb6253abc203207dd2e9.scope: Deactivated successfully. Jan 13 20:48:06.032076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f56c5a71cbc12b06dbcd10254e8f3d8cd63844261e7bb6253abc203207dd2e9-rootfs.mount: Deactivated successfully. 
Jan 13 20:48:06.059919 kubelet[1856]: I0113 20:48:06.059730 1856 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:48:06.957568 kubelet[1856]: E0113 20:48:06.957474 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:07.089048 systemd[1]: Created slice kubepods-besteffort-pod1d32a904_00d3_418b_b6e0_17302fc19462.slice - libcontainer container kubepods-besteffort-pod1d32a904_00d3_418b_b6e0_17302fc19462.slice. Jan 13 20:48:07.096272 containerd[1467]: time="2025-01-13T20:48:07.096162805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:0,}" Jan 13 20:48:07.332270 containerd[1467]: time="2025-01-13T20:48:07.331846352Z" level=info msg="shim disconnected" id=8f56c5a71cbc12b06dbcd10254e8f3d8cd63844261e7bb6253abc203207dd2e9 namespace=k8s.io Jan 13 20:48:07.332270 containerd[1467]: time="2025-01-13T20:48:07.331946388Z" level=warning msg="cleaning up after shim disconnected" id=8f56c5a71cbc12b06dbcd10254e8f3d8cd63844261e7bb6253abc203207dd2e9 namespace=k8s.io Jan 13 20:48:07.332270 containerd[1467]: time="2025-01-13T20:48:07.332010153Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:48:07.452652 containerd[1467]: time="2025-01-13T20:48:07.452563872Z" level=error msg="Failed to destroy network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:07.455171 containerd[1467]: time="2025-01-13T20:48:07.454761978Z" level=error msg="encountered an error cleaning up failed sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:07.455171 containerd[1467]: time="2025-01-13T20:48:07.454823530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:07.454322 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e-shm.mount: Deactivated successfully. Jan 13 20:48:07.456473 kubelet[1856]: E0113 20:48:07.455658 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:07.456473 kubelet[1856]: E0113 20:48:07.455802 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:07.456473 kubelet[1856]: E0113 20:48:07.455887 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:07.457209 kubelet[1856]: E0113 20:48:07.456311 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462" Jan 13 20:48:07.958642 kubelet[1856]: E0113 20:48:07.958564 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:08.176018 kubelet[1856]: I0113 20:48:08.174914 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e" Jan 13 20:48:08.176660 containerd[1467]: time="2025-01-13T20:48:08.176603809Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\"" Jan 13 20:48:08.178502 containerd[1467]: time="2025-01-13T20:48:08.178396764Z" level=info msg="Ensure that sandbox 0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e in task-service has been cleanup successfully" Jan 13 20:48:08.182125 containerd[1467]: 
time="2025-01-13T20:48:08.177556203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 20:48:08.182409 containerd[1467]: time="2025-01-13T20:48:08.182365460Z" level=info msg="TearDown network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" successfully" Jan 13 20:48:08.183647 containerd[1467]: time="2025-01-13T20:48:08.183598376Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" returns successfully" Jan 13 20:48:08.184543 systemd[1]: run-netns-cni\x2d74cd44d9\x2ded7e\x2d8a54\x2d1f72\x2d2305ec509012.mount: Deactivated successfully. Jan 13 20:48:08.187412 containerd[1467]: time="2025-01-13T20:48:08.187338314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:1,}" Jan 13 20:48:08.308665 containerd[1467]: time="2025-01-13T20:48:08.308506272Z" level=error msg="Failed to destroy network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:08.309570 containerd[1467]: time="2025-01-13T20:48:08.309478034Z" level=error msg="encountered an error cleaning up failed sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:08.309688 containerd[1467]: time="2025-01-13T20:48:08.309630121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup 
network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:08.310330 kubelet[1856]: E0113 20:48:08.310042 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:08.310330 kubelet[1856]: E0113 20:48:08.310089 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:08.310330 kubelet[1856]: E0113 20:48:08.310120 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:08.310441 kubelet[1856]: E0113 20:48:08.310158 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462"
Jan 13 20:48:08.349884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14-shm.mount: Deactivated successfully.
Jan 13 20:48:08.959297 kubelet[1856]: E0113 20:48:08.959217 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:48:09.180917 kubelet[1856]: I0113 20:48:09.180201 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14"
Jan 13 20:48:09.182082 containerd[1467]: time="2025-01-13T20:48:09.181561700Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\""
Jan 13 20:48:09.184570 containerd[1467]: time="2025-01-13T20:48:09.182737671Z" level=info msg="Ensure that sandbox 3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14 in task-service has been cleanup successfully"
Jan 13 20:48:09.185046 containerd[1467]: time="2025-01-13T20:48:09.184833460Z" level=info msg="TearDown network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" successfully"
Jan 13 20:48:09.185046 containerd[1467]: time="2025-01-13T20:48:09.184877034Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" returns successfully"
Jan 13 20:48:09.187050 systemd[1]: run-netns-cni\x2d98a24e7b\x2d706e\x2d0008\x2d0cbb\x2d98a68eb04b7e.mount: Deactivated successfully.
Jan 13 20:48:09.188379 containerd[1467]: time="2025-01-13T20:48:09.188302226Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\""
Jan 13 20:48:09.189513 containerd[1467]: time="2025-01-13T20:48:09.188540356Z" level=info msg="TearDown network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" successfully"
Jan 13 20:48:09.189513 containerd[1467]: time="2025-01-13T20:48:09.188582736Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" returns successfully"
Jan 13 20:48:09.192021 containerd[1467]: time="2025-01-13T20:48:09.191445521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:2,}"
Jan 13 20:48:09.333524 containerd[1467]: time="2025-01-13T20:48:09.333413561Z" level=error msg="Failed to destroy network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:09.336499 containerd[1467]: time="2025-01-13T20:48:09.336431788Z" level=error msg="encountered an error cleaning up failed sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:09.336622 containerd[1467]: time="2025-01-13T20:48:09.336574027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:09.337528 kubelet[1856]: E0113 20:48:09.337086 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:09.337528 kubelet[1856]: E0113 20:48:09.337313 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x"
Jan 13 20:48:09.337528 kubelet[1856]: E0113 20:48:09.337407 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x"
Jan 13 20:48:09.338774 kubelet[1856]: E0113 20:48:09.337578 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462"
Jan 13 20:48:09.338006 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726-shm.mount: Deactivated successfully.
Jan 13 20:48:09.960145 kubelet[1856]: E0113 20:48:09.960092 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:48:10.108553 systemd[1]: Created slice kubepods-besteffort-podbdb79565_9ed0_4d93_bc64_5bbf39492302.slice - libcontainer container kubepods-besteffort-podbdb79565_9ed0_4d93_bc64_5bbf39492302.slice.
Jan 13 20:48:10.188709 kubelet[1856]: I0113 20:48:10.187164 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726"
Jan 13 20:48:10.190094 containerd[1467]: time="2025-01-13T20:48:10.189406969Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\""
Jan 13 20:48:10.190094 containerd[1467]: time="2025-01-13T20:48:10.189820548Z" level=info msg="Ensure that sandbox 5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726 in task-service has been cleanup successfully"
Jan 13 20:48:10.193769 containerd[1467]: time="2025-01-13T20:48:10.193687634Z" level=info msg="TearDown network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" successfully"
Jan 13 20:48:10.196606 containerd[1467]: time="2025-01-13T20:48:10.195059493Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" returns successfully"
Jan 13 20:48:10.196084 systemd[1]: run-netns-cni\x2deffbc254\x2da21b\x2d9615\x2d6690\x2ddddd4d400307.mount: Deactivated successfully.
Jan 13 20:48:10.200353 containerd[1467]: time="2025-01-13T20:48:10.199536563Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\""
Jan 13 20:48:10.200353 containerd[1467]: time="2025-01-13T20:48:10.199722321Z" level=info msg="TearDown network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" successfully"
Jan 13 20:48:10.200353 containerd[1467]: time="2025-01-13T20:48:10.199817395Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" returns successfully"
Jan 13 20:48:10.203095 containerd[1467]: time="2025-01-13T20:48:10.202602101Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\""
Jan 13 20:48:10.203095 containerd[1467]: time="2025-01-13T20:48:10.202828215Z" level=info msg="TearDown network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" successfully"
Jan 13 20:48:10.203095 containerd[1467]: time="2025-01-13T20:48:10.202861710Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" returns successfully"
Jan 13 20:48:10.205073 containerd[1467]: time="2025-01-13T20:48:10.204935634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:3,}"
Jan 13 20:48:10.217900 kubelet[1856]: I0113 20:48:10.216189 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdrzd\" (UniqueName: \"kubernetes.io/projected/bdb79565-9ed0-4d93-bc64-5bbf39492302-kube-api-access-zdrzd\") pod \"nginx-deployment-8587fbcb89-94b4b\" (UID: \"bdb79565-9ed0-4d93-bc64-5bbf39492302\") " pod="default/nginx-deployment-8587fbcb89-94b4b"
Jan 13 20:48:10.315015 containerd[1467]: time="2025-01-13T20:48:10.314858370Z" level=error msg="Failed to destroy network for sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:10.316272 containerd[1467]: time="2025-01-13T20:48:10.315376023Z" level=error msg="encountered an error cleaning up failed sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:10.316272 containerd[1467]: time="2025-01-13T20:48:10.315439157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:10.316530 kubelet[1856]: E0113 20:48:10.316490 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:10.317488 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946-shm.mount: Deactivated successfully.
Jan 13 20:48:10.318202 kubelet[1856]: E0113 20:48:10.318138 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x"
Jan 13 20:48:10.318450 kubelet[1856]: E0113 20:48:10.318385 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x"
Jan 13 20:48:10.319822 kubelet[1856]: E0113 20:48:10.319065 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462"
Jan 13 20:48:10.416854 containerd[1467]: time="2025-01-13T20:48:10.416820569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:0,}"
Jan 13 20:48:10.522505 containerd[1467]: time="2025-01-13T20:48:10.517128762Z" level=error msg="Failed to destroy network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:10.522505 containerd[1467]: time="2025-01-13T20:48:10.517906857Z" level=error msg="encountered an error cleaning up failed sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:10.522505 containerd[1467]: time="2025-01-13T20:48:10.518480222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:10.522720 kubelet[1856]: E0113 20:48:10.518672 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:10.522720 kubelet[1856]: E0113 20:48:10.518731 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b"
Jan 13 20:48:10.522720 kubelet[1856]: E0113 20:48:10.518761 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b"
Jan 13 20:48:10.522821 kubelet[1856]: E0113 20:48:10.518809 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-94b4b" podUID="bdb79565-9ed0-4d93-bc64-5bbf39492302"
Jan 13 20:48:10.938036 kubelet[1856]: E0113 20:48:10.937867 1856 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:48:10.960496 kubelet[1856]: E0113 20:48:10.960433 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:48:11.202276 kubelet[1856]: I0113 20:48:11.198612 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946"
Jan 13 20:48:11.204551 containerd[1467]: time="2025-01-13T20:48:11.203553423Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\""
Jan 13 20:48:11.204551 containerd[1467]: time="2025-01-13T20:48:11.204195609Z" level=info msg="Ensure that sandbox e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946 in task-service has been cleanup successfully"
Jan 13 20:48:11.208937 containerd[1467]: time="2025-01-13T20:48:11.208872515Z" level=info msg="TearDown network for sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" successfully"
Jan 13 20:48:11.209195 containerd[1467]: time="2025-01-13T20:48:11.209152044Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" returns successfully"
Jan 13 20:48:11.210946 systemd[1]: run-netns-cni\x2db5cc182a\x2da624\x2d8714\x2db96f\x2dac2a6295895b.mount: Deactivated successfully.
Jan 13 20:48:11.214772 containerd[1467]: time="2025-01-13T20:48:11.211921398Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\""
Jan 13 20:48:11.216346 containerd[1467]: time="2025-01-13T20:48:11.215249682Z" level=info msg="TearDown network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" successfully"
Jan 13 20:48:11.216346 containerd[1467]: time="2025-01-13T20:48:11.215378603Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" returns successfully"
Jan 13 20:48:11.217521 containerd[1467]: time="2025-01-13T20:48:11.217440689Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\""
Jan 13 20:48:11.217916 containerd[1467]: time="2025-01-13T20:48:11.217874683Z" level=info msg="TearDown network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" successfully"
Jan 13 20:48:11.219068 containerd[1467]: time="2025-01-13T20:48:11.218045905Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" returns successfully"
Jan 13 20:48:11.219161 kubelet[1856]: I0113 20:48:11.218489 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885"
Jan 13 20:48:11.219545 containerd[1467]: time="2025-01-13T20:48:11.219499873Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\""
Jan 13 20:48:11.220131 containerd[1467]: time="2025-01-13T20:48:11.220098884Z" level=info msg="Ensure that sandbox a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885 in task-service has been cleanup successfully"
Jan 13 20:48:11.222140 containerd[1467]: time="2025-01-13T20:48:11.222105053Z" level=info msg="TearDown network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" successfully"
Jan 13 20:48:11.224002 containerd[1467]: time="2025-01-13T20:48:11.222259137Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" returns successfully"
Jan 13 20:48:11.224002 containerd[1467]: time="2025-01-13T20:48:11.222370546Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\""
Jan 13 20:48:11.224002 containerd[1467]: time="2025-01-13T20:48:11.222480069Z" level=info msg="TearDown network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" successfully"
Jan 13 20:48:11.224002 containerd[1467]: time="2025-01-13T20:48:11.222499969Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" returns successfully"
Jan 13 20:48:11.223949 systemd[1]: run-netns-cni\x2d840fd88b\x2df63e\x2d426a\x2d7aad\x2d434f70e97aad.mount: Deactivated successfully.
Jan 13 20:48:11.224456 containerd[1467]: time="2025-01-13T20:48:11.224435192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:4,}"
Jan 13 20:48:11.234566 containerd[1467]: time="2025-01-13T20:48:11.234531895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:1,}"
Jan 13 20:48:11.358935 containerd[1467]: time="2025-01-13T20:48:11.358873496Z" level=error msg="Failed to destroy network for sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:11.360985 containerd[1467]: time="2025-01-13T20:48:11.359985621Z" level=error msg="encountered an error cleaning up failed sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:11.360985 containerd[1467]: time="2025-01-13T20:48:11.360085445Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:11.382389 kubelet[1856]: E0113 20:48:11.382325 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:11.383071 kubelet[1856]: E0113 20:48:11.383039 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x"
Jan 13 20:48:11.383169 kubelet[1856]: E0113 20:48:11.383149 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x"
Jan 13 20:48:11.383335 kubelet[1856]: E0113 20:48:11.383297 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462"
Jan 13 20:48:11.410188 containerd[1467]: time="2025-01-13T20:48:11.410132447Z" level=error msg="Failed to destroy network for sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:11.410689 containerd[1467]: time="2025-01-13T20:48:11.410663774Z" level=error msg="encountered an error cleaning up failed sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:11.410815 containerd[1467]: time="2025-01-13T20:48:11.410791160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:11.411219 kubelet[1856]: E0113 20:48:11.411183 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:11.411393 kubelet[1856]: E0113 20:48:11.411354 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b"
Jan 13 20:48:11.411813 kubelet[1856]: E0113 20:48:11.411459 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b"
Jan 13 20:48:11.411813 kubelet[1856]: E0113 20:48:11.411547 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-94b4b" podUID="bdb79565-9ed0-4d93-bc64-5bbf39492302"
Jan 13 20:48:11.961447 kubelet[1856]: E0113 20:48:11.961395 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:48:12.196272 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8-shm.mount: Deactivated successfully.
Jan 13 20:48:12.226020 kubelet[1856]: I0113 20:48:12.224587 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8"
Jan 13 20:48:12.227751 containerd[1467]: time="2025-01-13T20:48:12.227051779Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\""
Jan 13 20:48:12.227751 containerd[1467]: time="2025-01-13T20:48:12.227520788Z" level=info msg="Ensure that sandbox af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8 in task-service has been cleanup successfully"
Jan 13 20:48:12.231826 containerd[1467]: time="2025-01-13T20:48:12.231757789Z" level=info msg="TearDown network for sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" successfully"
Jan 13 20:48:12.231920 systemd[1]: run-netns-cni\x2d5f90833f\x2d60be\x2d1979\x2db1cc\x2dfcecae14f897.mount: Deactivated successfully.
Jan 13 20:48:12.233839 containerd[1467]: time="2025-01-13T20:48:12.233400095Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" returns successfully"
Jan 13 20:48:12.237840 containerd[1467]: time="2025-01-13T20:48:12.237787403Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\""
Jan 13 20:48:12.238721 containerd[1467]: time="2025-01-13T20:48:12.238266169Z" level=info msg="TearDown network for sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" successfully"
Jan 13 20:48:12.238721 containerd[1467]: time="2025-01-13T20:48:12.238345176Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" returns successfully"
Jan 13 20:48:12.239340 containerd[1467]: time="2025-01-13T20:48:12.239295983Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\""
Jan 13 20:48:12.240037 containerd[1467]: time="2025-01-13T20:48:12.239591939Z" level=info msg="TearDown network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" successfully"
Jan 13 20:48:12.240037 containerd[1467]: time="2025-01-13T20:48:12.239629300Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" returns successfully"
Jan 13 20:48:12.240573 containerd[1467]: time="2025-01-13T20:48:12.240529265Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\""
Jan 13 20:48:12.241265 containerd[1467]: time="2025-01-13T20:48:12.240875883Z" level=info msg="TearDown network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" successfully"
Jan 13 20:48:12.241265 containerd[1467]: time="2025-01-13T20:48:12.240914930Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" returns successfully"
Jan 13 20:48:12.241436 kubelet[1856]: I0113 20:48:12.241379 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df"
Jan 13 20:48:12.243642 containerd[1467]: time="2025-01-13T20:48:12.242491938Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\""
Jan 13 20:48:12.243642 containerd[1467]: time="2025-01-13T20:48:12.242654100Z" level=info msg="TearDown network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" successfully"
Jan 13 20:48:12.243642 containerd[1467]: time="2025-01-13T20:48:12.242679947Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" returns successfully"
Jan 13 20:48:12.243642 containerd[1467]: time="2025-01-13T20:48:12.242786580Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\""
Jan 13 20:48:12.244664 containerd[1467]: time="2025-01-13T20:48:12.244381154Z" level=info msg="Ensure that sandbox fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df in task-service has been cleanup successfully"
Jan 13 20:48:12.245649 containerd[1467]: time="2025-01-13T20:48:12.244832356Z" level=info msg="TearDown network for sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" successfully"
Jan 13 20:48:12.247257 systemd[1]: run-netns-cni\x2dafbe24af\x2d9a6d\x2d8f91\x2d549a\x2d5a8c6cbc9db2.mount: Deactivated successfully.
Jan 13 20:48:12.247665 containerd[1467]: time="2025-01-13T20:48:12.247510578Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" returns successfully"
Jan 13 20:48:12.250222 containerd[1467]: time="2025-01-13T20:48:12.250169347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:5,}"
Jan 13 20:48:12.262889 containerd[1467]: time="2025-01-13T20:48:12.262822463Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\""
Jan 13 20:48:12.263589 containerd[1467]: time="2025-01-13T20:48:12.263544647Z" level=info msg="TearDown network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" successfully"
Jan 13 20:48:12.263870 containerd[1467]: time="2025-01-13T20:48:12.263831349Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" returns successfully"
Jan 13 20:48:12.265942 containerd[1467]: time="2025-01-13T20:48:12.265889662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:2,}"
Jan 13 20:48:12.380326 containerd[1467]: time="2025-01-13T20:48:12.380202150Z" level=error msg="Failed to destroy network for sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:12.380648 containerd[1467]: time="2025-01-13T20:48:12.380615438Z" level=error msg="encountered an error cleaning up failed sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:12.380718 containerd[1467]: time="2025-01-13T20:48:12.380690874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:12.381112 kubelet[1856]: E0113 20:48:12.381044 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:48:12.381577 kubelet[1856]: E0113 20:48:12.381175 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b"
Jan 13 20:48:12.381577 kubelet[1856]: E0113 20:48:12.381208 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that
the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b" Jan 13 20:48:12.381577 kubelet[1856]: E0113 20:48:12.381287 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-94b4b" podUID="bdb79565-9ed0-4d93-bc64-5bbf39492302" Jan 13 20:48:12.401182 containerd[1467]: time="2025-01-13T20:48:12.401113115Z" level=error msg="Failed to destroy network for sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:12.401516 containerd[1467]: time="2025-01-13T20:48:12.401474227Z" level=error msg="encountered an error cleaning up failed sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:12.401575 containerd[1467]: time="2025-01-13T20:48:12.401549029Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:12.402138 kubelet[1856]: E0113 20:48:12.401766 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:12.402138 kubelet[1856]: E0113 20:48:12.401826 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:12.402138 kubelet[1856]: E0113 20:48:12.401848 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:12.402257 kubelet[1856]: E0113 20:48:12.401893 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462" Jan 13 20:48:12.962691 kubelet[1856]: E0113 20:48:12.962564 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:13.197846 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692-shm.mount: Deactivated successfully. 
Jan 13 20:48:13.256826 kubelet[1856]: I0113 20:48:13.256230 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692" Jan 13 20:48:13.259290 containerd[1467]: time="2025-01-13T20:48:13.257684432Z" level=info msg="StopPodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\"" Jan 13 20:48:13.259290 containerd[1467]: time="2025-01-13T20:48:13.258119119Z" level=info msg="Ensure that sandbox d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692 in task-service has been cleanup successfully" Jan 13 20:48:13.262035 containerd[1467]: time="2025-01-13T20:48:13.260316718Z" level=info msg="TearDown network for sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" successfully" Jan 13 20:48:13.262035 containerd[1467]: time="2025-01-13T20:48:13.260366153Z" level=info msg="StopPodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" returns successfully" Jan 13 20:48:13.264864 containerd[1467]: time="2025-01-13T20:48:13.264812061Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\"" Jan 13 20:48:13.265226 containerd[1467]: time="2025-01-13T20:48:13.265186153Z" level=info msg="TearDown network for sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" successfully" Jan 13 20:48:13.265418 containerd[1467]: time="2025-01-13T20:48:13.265381342Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" returns successfully" Jan 13 20:48:13.267251 containerd[1467]: time="2025-01-13T20:48:13.267204166Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\"" Jan 13 20:48:13.267610 containerd[1467]: time="2025-01-13T20:48:13.267570071Z" level=info msg="TearDown network for sandbox 
\"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" successfully" Jan 13 20:48:13.267790 containerd[1467]: time="2025-01-13T20:48:13.267756440Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" returns successfully" Jan 13 20:48:13.267846 systemd[1]: run-netns-cni\x2d6de895dd\x2d9118\x2d88ed\x2d8337\x2de3fdce80a992.mount: Deactivated successfully. Jan 13 20:48:13.270637 containerd[1467]: time="2025-01-13T20:48:13.270571642Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\"" Jan 13 20:48:13.270810 containerd[1467]: time="2025-01-13T20:48:13.270763149Z" level=info msg="TearDown network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" successfully" Jan 13 20:48:13.272213 containerd[1467]: time="2025-01-13T20:48:13.270802648Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" returns successfully" Jan 13 20:48:13.273008 containerd[1467]: time="2025-01-13T20:48:13.272590740Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\"" Jan 13 20:48:13.273008 containerd[1467]: time="2025-01-13T20:48:13.272783973Z" level=info msg="TearDown network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" successfully" Jan 13 20:48:13.273008 containerd[1467]: time="2025-01-13T20:48:13.272812925Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" returns successfully" Jan 13 20:48:13.273692 containerd[1467]: time="2025-01-13T20:48:13.273619246Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\"" Jan 13 20:48:13.274991 containerd[1467]: time="2025-01-13T20:48:13.274860152Z" level=info msg="TearDown network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" 
successfully" Jan 13 20:48:13.275215 containerd[1467]: time="2025-01-13T20:48:13.274948846Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" returns successfully" Jan 13 20:48:13.275585 kubelet[1856]: I0113 20:48:13.275320 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e" Jan 13 20:48:13.277142 containerd[1467]: time="2025-01-13T20:48:13.276501866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:6,}" Jan 13 20:48:13.277520 containerd[1467]: time="2025-01-13T20:48:13.277477897Z" level=info msg="StopPodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\"" Jan 13 20:48:13.278134 containerd[1467]: time="2025-01-13T20:48:13.278091989Z" level=info msg="Ensure that sandbox ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e in task-service has been cleanup successfully" Jan 13 20:48:13.278624 containerd[1467]: time="2025-01-13T20:48:13.278585493Z" level=info msg="TearDown network for sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" successfully" Jan 13 20:48:13.278785 containerd[1467]: time="2025-01-13T20:48:13.278754139Z" level=info msg="StopPodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" returns successfully" Jan 13 20:48:13.279595 containerd[1467]: time="2025-01-13T20:48:13.279553174Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\"" Jan 13 20:48:13.279936 containerd[1467]: time="2025-01-13T20:48:13.279890125Z" level=info msg="TearDown network for sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" successfully" Jan 13 20:48:13.280210 containerd[1467]: time="2025-01-13T20:48:13.280049489Z" level=info msg="StopPodSandbox for 
\"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" returns successfully" Jan 13 20:48:13.281913 containerd[1467]: time="2025-01-13T20:48:13.281630708Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\"" Jan 13 20:48:13.281913 containerd[1467]: time="2025-01-13T20:48:13.281784031Z" level=info msg="TearDown network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" successfully" Jan 13 20:48:13.281913 containerd[1467]: time="2025-01-13T20:48:13.281808808Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" returns successfully" Jan 13 20:48:13.283277 containerd[1467]: time="2025-01-13T20:48:13.282804488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:3,}" Jan 13 20:48:13.286746 systemd[1]: run-netns-cni\x2d22cce465\x2d5de8\x2d695e\x2dada2\x2da297ec85880d.mount: Deactivated successfully. 
Jan 13 20:48:13.428980 containerd[1467]: time="2025-01-13T20:48:13.428846024Z" level=error msg="Failed to destroy network for sandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:13.429553 containerd[1467]: time="2025-01-13T20:48:13.429433439Z" level=error msg="encountered an error cleaning up failed sandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:13.429553 containerd[1467]: time="2025-01-13T20:48:13.429515420Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:13.430247 kubelet[1856]: E0113 20:48:13.430087 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:13.430247 kubelet[1856]: E0113 20:48:13.430150 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b" Jan 13 20:48:13.430247 kubelet[1856]: E0113 20:48:13.430172 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b" Jan 13 20:48:13.430391 kubelet[1856]: E0113 20:48:13.430223 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-94b4b" podUID="bdb79565-9ed0-4d93-bc64-5bbf39492302" Jan 13 20:48:13.439004 containerd[1467]: time="2025-01-13T20:48:13.438924493Z" level=error msg="Failed to destroy network for sandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 13 20:48:13.439346 containerd[1467]: time="2025-01-13T20:48:13.439281435Z" level=error msg="encountered an error cleaning up failed sandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:13.439486 containerd[1467]: time="2025-01-13T20:48:13.439360013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:13.439768 kubelet[1856]: E0113 20:48:13.439732 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:13.439840 kubelet[1856]: E0113 20:48:13.439788 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:13.439840 kubelet[1856]: E0113 
20:48:13.439812 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:13.440016 kubelet[1856]: E0113 20:48:13.439879 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462" Jan 13 20:48:13.963727 kubelet[1856]: E0113 20:48:13.963681 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:14.196843 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c-shm.mount: Deactivated successfully. 
Jan 13 20:48:14.280757 kubelet[1856]: I0113 20:48:14.280439 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c" Jan 13 20:48:14.281636 containerd[1467]: time="2025-01-13T20:48:14.281609403Z" level=info msg="StopPodSandbox for \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\"" Jan 13 20:48:14.282220 containerd[1467]: time="2025-01-13T20:48:14.282016045Z" level=info msg="Ensure that sandbox b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c in task-service has been cleanup successfully" Jan 13 20:48:14.283717 systemd[1]: run-netns-cni\x2d70deef9b\x2d3cbc\x2d2080\x2d57f8\x2d34a2e002d551.mount: Deactivated successfully. Jan 13 20:48:14.284536 containerd[1467]: time="2025-01-13T20:48:14.284503013Z" level=info msg="TearDown network for sandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\" successfully" Jan 13 20:48:14.284814 containerd[1467]: time="2025-01-13T20:48:14.284791432Z" level=info msg="StopPodSandbox for \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\" returns successfully" Jan 13 20:48:14.285454 containerd[1467]: time="2025-01-13T20:48:14.285272534Z" level=info msg="StopPodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\"" Jan 13 20:48:14.285454 containerd[1467]: time="2025-01-13T20:48:14.285342025Z" level=info msg="TearDown network for sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" successfully" Jan 13 20:48:14.285454 containerd[1467]: time="2025-01-13T20:48:14.285353382Z" level=info msg="StopPodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" returns successfully" Jan 13 20:48:14.286889 containerd[1467]: time="2025-01-13T20:48:14.286870823Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\"" Jan 13 20:48:14.287079 containerd[1467]: 
time="2025-01-13T20:48:14.287062962Z" level=info msg="TearDown network for sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" successfully" Jan 13 20:48:14.287476 containerd[1467]: time="2025-01-13T20:48:14.287336202Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" returns successfully" Jan 13 20:48:14.288429 containerd[1467]: time="2025-01-13T20:48:14.288398507Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\"" Jan 13 20:48:14.288502 containerd[1467]: time="2025-01-13T20:48:14.288482998Z" level=info msg="TearDown network for sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" successfully" Jan 13 20:48:14.288556 containerd[1467]: time="2025-01-13T20:48:14.288499714Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" returns successfully" Jan 13 20:48:14.289101 containerd[1467]: time="2025-01-13T20:48:14.289054179Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\"" Jan 13 20:48:14.289379 containerd[1467]: time="2025-01-13T20:48:14.289295392Z" level=info msg="TearDown network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" successfully" Jan 13 20:48:14.289379 containerd[1467]: time="2025-01-13T20:48:14.289312649Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" returns successfully" Jan 13 20:48:14.290134 containerd[1467]: time="2025-01-13T20:48:14.290117127Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\"" Jan 13 20:48:14.290270 containerd[1467]: time="2025-01-13T20:48:14.290233514Z" level=info msg="TearDown network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" successfully" Jan 13 20:48:14.290270 containerd[1467]: 
time="2025-01-13T20:48:14.290248073Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" returns successfully" Jan 13 20:48:14.290338 kubelet[1856]: I0113 20:48:14.290324 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f" Jan 13 20:48:14.292980 containerd[1467]: time="2025-01-13T20:48:14.292072916Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\"" Jan 13 20:48:14.293401 containerd[1467]: time="2025-01-13T20:48:14.293052144Z" level=info msg="TearDown network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" successfully" Jan 13 20:48:14.293401 containerd[1467]: time="2025-01-13T20:48:14.293073747Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" returns successfully" Jan 13 20:48:14.293401 containerd[1467]: time="2025-01-13T20:48:14.293113328Z" level=info msg="StopPodSandbox for \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\"" Jan 13 20:48:14.293401 containerd[1467]: time="2025-01-13T20:48:14.293291280Z" level=info msg="Ensure that sandbox b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f in task-service has been cleanup successfully" Jan 13 20:48:14.293688 containerd[1467]: time="2025-01-13T20:48:14.293613654Z" level=info msg="TearDown network for sandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\" successfully" Jan 13 20:48:14.293688 containerd[1467]: time="2025-01-13T20:48:14.293632004Z" level=info msg="StopPodSandbox for \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\" returns successfully" Jan 13 20:48:14.295581 systemd[1]: run-netns-cni\x2deffe0422\x2dae07\x2de5b9\x2de3bf\x2d46f479db77ab.mount: Deactivated successfully. 
Jan 13 20:48:14.298147 containerd[1467]: time="2025-01-13T20:48:14.297792301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:7,}" Jan 13 20:48:14.298147 containerd[1467]: time="2025-01-13T20:48:14.297882531Z" level=info msg="StopPodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\"" Jan 13 20:48:14.298147 containerd[1467]: time="2025-01-13T20:48:14.297991734Z" level=info msg="TearDown network for sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" successfully" Jan 13 20:48:14.298147 containerd[1467]: time="2025-01-13T20:48:14.298005700Z" level=info msg="StopPodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" returns successfully" Jan 13 20:48:14.300066 containerd[1467]: time="2025-01-13T20:48:14.300039088Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\"" Jan 13 20:48:14.300559 containerd[1467]: time="2025-01-13T20:48:14.300111901Z" level=info msg="TearDown network for sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" successfully" Jan 13 20:48:14.300559 containerd[1467]: time="2025-01-13T20:48:14.300128887Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" returns successfully" Jan 13 20:48:14.302913 containerd[1467]: time="2025-01-13T20:48:14.302883053Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\"" Jan 13 20:48:14.303005 containerd[1467]: time="2025-01-13T20:48:14.302980557Z" level=info msg="TearDown network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" successfully" Jan 13 20:48:14.303005 containerd[1467]: time="2025-01-13T20:48:14.302999410Z" level=info msg="StopPodSandbox for 
\"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" returns successfully" Jan 13 20:48:14.303471 containerd[1467]: time="2025-01-13T20:48:14.303439454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:4,}" Jan 13 20:48:14.423834 containerd[1467]: time="2025-01-13T20:48:14.423782097Z" level=error msg="Failed to destroy network for sandbox \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:14.424694 containerd[1467]: time="2025-01-13T20:48:14.424661534Z" level=error msg="encountered an error cleaning up failed sandbox \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:14.424794 containerd[1467]: time="2025-01-13T20:48:14.424755887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:14.425188 kubelet[1856]: E0113 20:48:14.425153 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:14.425472 kubelet[1856]: E0113 20:48:14.425326 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:14.425834 kubelet[1856]: E0113 20:48:14.425407 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:14.425834 kubelet[1856]: E0113 20:48:14.425598 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462" Jan 13 20:48:14.437697 containerd[1467]: 
time="2025-01-13T20:48:14.437645636Z" level=error msg="Failed to destroy network for sandbox \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:14.438846 containerd[1467]: time="2025-01-13T20:48:14.438666262Z" level=error msg="encountered an error cleaning up failed sandbox \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:14.438846 containerd[1467]: time="2025-01-13T20:48:14.438736496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:14.439243 kubelet[1856]: E0113 20:48:14.439103 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:14.439243 kubelet[1856]: E0113 20:48:14.439159 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b" Jan 13 20:48:14.439243 kubelet[1856]: E0113 20:48:14.439181 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b" Jan 13 20:48:14.439382 kubelet[1856]: E0113 20:48:14.439219 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-94b4b" podUID="bdb79565-9ed0-4d93-bc64-5bbf39492302" Jan 13 20:48:14.964046 kubelet[1856]: E0113 20:48:14.964004 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:15.195744 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3-shm.mount: Deactivated successfully. 
Jan 13 20:48:15.294900 kubelet[1856]: I0113 20:48:15.294791 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3" Jan 13 20:48:15.296988 containerd[1467]: time="2025-01-13T20:48:15.295952275Z" level=info msg="StopPodSandbox for \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\"" Jan 13 20:48:15.296988 containerd[1467]: time="2025-01-13T20:48:15.296145974Z" level=info msg="Ensure that sandbox 57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3 in task-service has been cleanup successfully" Jan 13 20:48:15.297868 systemd[1]: run-netns-cni\x2d79dbc3e1\x2d6a67\x2d7a5e\x2d139d\x2d1f77fa6bc7e8.mount: Deactivated successfully. Jan 13 20:48:15.298356 containerd[1467]: time="2025-01-13T20:48:15.298022720Z" level=info msg="TearDown network for sandbox \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\" successfully" Jan 13 20:48:15.298356 containerd[1467]: time="2025-01-13T20:48:15.298041168Z" level=info msg="StopPodSandbox for \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\" returns successfully" Jan 13 20:48:15.300112 containerd[1467]: time="2025-01-13T20:48:15.299909447Z" level=info msg="StopPodSandbox for \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\"" Jan 13 20:48:15.300262 containerd[1467]: time="2025-01-13T20:48:15.300087147Z" level=info msg="TearDown network for sandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\" successfully" Jan 13 20:48:15.300262 containerd[1467]: time="2025-01-13T20:48:15.300252767Z" level=info msg="StopPodSandbox for \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\" returns successfully" Jan 13 20:48:15.300924 containerd[1467]: time="2025-01-13T20:48:15.300900205Z" level=info msg="StopPodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\"" Jan 13 20:48:15.301316 containerd[1467]: 
time="2025-01-13T20:48:15.301288858Z" level=info msg="TearDown network for sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" successfully" Jan 13 20:48:15.301316 containerd[1467]: time="2025-01-13T20:48:15.301307316Z" level=info msg="StopPodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" returns successfully" Jan 13 20:48:15.302389 containerd[1467]: time="2025-01-13T20:48:15.301649232Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\"" Jan 13 20:48:15.302389 containerd[1467]: time="2025-01-13T20:48:15.301795582Z" level=info msg="TearDown network for sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" successfully" Jan 13 20:48:15.302389 containerd[1467]: time="2025-01-13T20:48:15.301808603Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" returns successfully" Jan 13 20:48:15.302389 containerd[1467]: time="2025-01-13T20:48:15.302056824Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\"" Jan 13 20:48:15.302389 containerd[1467]: time="2025-01-13T20:48:15.302119662Z" level=info msg="TearDown network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" successfully" Jan 13 20:48:15.302389 containerd[1467]: time="2025-01-13T20:48:15.302130857Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" returns successfully" Jan 13 20:48:15.302571 containerd[1467]: time="2025-01-13T20:48:15.302484350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:5,}" Jan 13 20:48:15.306017 kubelet[1856]: I0113 20:48:15.305809 1856 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5" Jan 13 20:48:15.306790 containerd[1467]: time="2025-01-13T20:48:15.306763826Z" level=info msg="StopPodSandbox for \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\"" Jan 13 20:48:15.306982 containerd[1467]: time="2025-01-13T20:48:15.306920820Z" level=info msg="Ensure that sandbox ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5 in task-service has been cleanup successfully" Jan 13 20:48:15.308712 containerd[1467]: time="2025-01-13T20:48:15.308686747Z" level=info msg="TearDown network for sandbox \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\" successfully" Jan 13 20:48:15.308712 containerd[1467]: time="2025-01-13T20:48:15.308706820Z" level=info msg="StopPodSandbox for \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\" returns successfully" Jan 13 20:48:15.309360 containerd[1467]: time="2025-01-13T20:48:15.309091962Z" level=info msg="StopPodSandbox for \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\"" Jan 13 20:48:15.309360 containerd[1467]: time="2025-01-13T20:48:15.309182226Z" level=info msg="TearDown network for sandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\" successfully" Jan 13 20:48:15.309360 containerd[1467]: time="2025-01-13T20:48:15.309196562Z" level=info msg="StopPodSandbox for \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\" returns successfully" Jan 13 20:48:15.309329 systemd[1]: run-netns-cni\x2d48f5e2f4\x2d0992\x2d9a96\x2da9ce\x2db90bf59f775c.mount: Deactivated successfully. 
Jan 13 20:48:15.311002 containerd[1467]: time="2025-01-13T20:48:15.310870569Z" level=info msg="StopPodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\"" Jan 13 20:48:15.311002 containerd[1467]: time="2025-01-13T20:48:15.310997518Z" level=info msg="TearDown network for sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" successfully" Jan 13 20:48:15.311177 containerd[1467]: time="2025-01-13T20:48:15.311011814Z" level=info msg="StopPodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" returns successfully" Jan 13 20:48:15.311340 containerd[1467]: time="2025-01-13T20:48:15.311274590Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\"" Jan 13 20:48:15.312975 containerd[1467]: time="2025-01-13T20:48:15.311574133Z" level=info msg="TearDown network for sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" successfully" Jan 13 20:48:15.312975 containerd[1467]: time="2025-01-13T20:48:15.311592792Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" returns successfully" Jan 13 20:48:15.312975 containerd[1467]: time="2025-01-13T20:48:15.312288310Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\"" Jan 13 20:48:15.312975 containerd[1467]: time="2025-01-13T20:48:15.312566686Z" level=info msg="TearDown network for sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" successfully" Jan 13 20:48:15.312975 containerd[1467]: time="2025-01-13T20:48:15.312580359Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" returns successfully" Jan 13 20:48:15.313572 containerd[1467]: time="2025-01-13T20:48:15.313543850Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\"" Jan 13 20:48:15.313790 
containerd[1467]: time="2025-01-13T20:48:15.313771758Z" level=info msg="TearDown network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" successfully" Jan 13 20:48:15.313905 containerd[1467]: time="2025-01-13T20:48:15.313887934Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" returns successfully" Jan 13 20:48:15.314764 containerd[1467]: time="2025-01-13T20:48:15.314573982Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\"" Jan 13 20:48:15.315312 containerd[1467]: time="2025-01-13T20:48:15.315292744Z" level=info msg="TearDown network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" successfully" Jan 13 20:48:15.315512 containerd[1467]: time="2025-01-13T20:48:15.315495994Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" returns successfully" Jan 13 20:48:15.316203 containerd[1467]: time="2025-01-13T20:48:15.316184350Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\"" Jan 13 20:48:15.319101 containerd[1467]: time="2025-01-13T20:48:15.316293915Z" level=info msg="TearDown network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" successfully" Jan 13 20:48:15.319101 containerd[1467]: time="2025-01-13T20:48:15.316307588Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" returns successfully" Jan 13 20:48:15.319101 containerd[1467]: time="2025-01-13T20:48:15.316660839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:8,}" Jan 13 20:48:15.434606 containerd[1467]: time="2025-01-13T20:48:15.434560610Z" level=error msg="Failed to destroy network for sandbox 
\"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:15.436821 containerd[1467]: time="2025-01-13T20:48:15.435940663Z" level=error msg="encountered an error cleaning up failed sandbox \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:15.436821 containerd[1467]: time="2025-01-13T20:48:15.436520097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:15.436918 kubelet[1856]: E0113 20:48:15.436828 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:15.436918 kubelet[1856]: E0113 20:48:15.436884 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:15.436918 kubelet[1856]: E0113 20:48:15.436908 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:15.437248 kubelet[1856]: E0113 20:48:15.437116 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462" Jan 13 20:48:15.445650 containerd[1467]: time="2025-01-13T20:48:15.445610120Z" level=error msg="Failed to destroy network for sandbox \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:15.446266 containerd[1467]: time="2025-01-13T20:48:15.446230392Z" level=error msg="encountered an error cleaning up failed sandbox 
\"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:15.446400 containerd[1467]: time="2025-01-13T20:48:15.446300974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:15.446824 kubelet[1856]: E0113 20:48:15.446782 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:15.446895 kubelet[1856]: E0113 20:48:15.446848 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b" Jan 13 20:48:15.446895 kubelet[1856]: E0113 20:48:15.446870 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b" Jan 13 20:48:15.447024 kubelet[1856]: E0113 20:48:15.446913 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-94b4b" podUID="bdb79565-9ed0-4d93-bc64-5bbf39492302" Jan 13 20:48:15.964355 kubelet[1856]: E0113 20:48:15.964318 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:16.195579 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9-shm.mount: Deactivated successfully. 
Jan 13 20:48:16.309926 kubelet[1856]: I0113 20:48:16.309589 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9" Jan 13 20:48:16.310992 containerd[1467]: time="2025-01-13T20:48:16.310756526Z" level=info msg="StopPodSandbox for \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\"" Jan 13 20:48:16.312792 containerd[1467]: time="2025-01-13T20:48:16.310997376Z" level=info msg="Ensure that sandbox 6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9 in task-service has been cleanup successfully" Jan 13 20:48:16.313401 containerd[1467]: time="2025-01-13T20:48:16.312989158Z" level=info msg="TearDown network for sandbox \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\" successfully" Jan 13 20:48:16.313401 containerd[1467]: time="2025-01-13T20:48:16.313240067Z" level=info msg="StopPodSandbox for \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\" returns successfully" Jan 13 20:48:16.313902 containerd[1467]: time="2025-01-13T20:48:16.313571277Z" level=info msg="StopPodSandbox for \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\"" Jan 13 20:48:16.313902 containerd[1467]: time="2025-01-13T20:48:16.313638558Z" level=info msg="TearDown network for sandbox \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\" successfully" Jan 13 20:48:16.313902 containerd[1467]: time="2025-01-13T20:48:16.313649581Z" level=info msg="StopPodSandbox for \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\" returns successfully" Jan 13 20:48:16.313979 systemd[1]: run-netns-cni\x2d7bff2c47\x2d3e6f\x2da40a\x2dc4b7\x2df37ed46cee71.mount: Deactivated successfully. 
Jan 13 20:48:16.315301 containerd[1467]: time="2025-01-13T20:48:16.315272701Z" level=info msg="StopPodSandbox for \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\"" Jan 13 20:48:16.315494 containerd[1467]: time="2025-01-13T20:48:16.315350142Z" level=info msg="TearDown network for sandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\" successfully" Jan 13 20:48:16.315494 containerd[1467]: time="2025-01-13T20:48:16.315365899Z" level=info msg="StopPodSandbox for \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\" returns successfully" Jan 13 20:48:16.316984 containerd[1467]: time="2025-01-13T20:48:16.316825861Z" level=info msg="StopPodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\"" Jan 13 20:48:16.317190 containerd[1467]: time="2025-01-13T20:48:16.317086099Z" level=info msg="TearDown network for sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" successfully" Jan 13 20:48:16.317190 containerd[1467]: time="2025-01-13T20:48:16.317185316Z" level=info msg="StopPodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" returns successfully" Jan 13 20:48:16.317688 containerd[1467]: time="2025-01-13T20:48:16.317660606Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\"" Jan 13 20:48:16.317945 containerd[1467]: time="2025-01-13T20:48:16.317728088Z" level=info msg="TearDown network for sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" successfully" Jan 13 20:48:16.317945 containerd[1467]: time="2025-01-13T20:48:16.317744246Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" returns successfully" Jan 13 20:48:16.318348 containerd[1467]: time="2025-01-13T20:48:16.318098353Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\"" Jan 13 20:48:16.318348 
containerd[1467]: time="2025-01-13T20:48:16.318216797Z" level=info msg="TearDown network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" successfully" Jan 13 20:48:16.318348 containerd[1467]: time="2025-01-13T20:48:16.318229365Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" returns successfully" Jan 13 20:48:16.318942 containerd[1467]: time="2025-01-13T20:48:16.318915225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:6,}" Jan 13 20:48:16.330410 kubelet[1856]: I0113 20:48:16.330383 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc" Jan 13 20:48:16.331385 containerd[1467]: time="2025-01-13T20:48:16.330997607Z" level=info msg="StopPodSandbox for \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\"" Jan 13 20:48:16.331385 containerd[1467]: time="2025-01-13T20:48:16.331252930Z" level=info msg="Ensure that sandbox a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc in task-service has been cleanup successfully" Jan 13 20:48:16.331571 containerd[1467]: time="2025-01-13T20:48:16.331552916Z" level=info msg="TearDown network for sandbox \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\" successfully" Jan 13 20:48:16.331640 containerd[1467]: time="2025-01-13T20:48:16.331623457Z" level=info msg="StopPodSandbox for \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\" returns successfully" Jan 13 20:48:16.334224 systemd[1]: run-netns-cni\x2d97ac6fac\x2dd9a5\x2d65ae\x2d98a0\x2d25f4cb2e2513.mount: Deactivated successfully. 
Jan 13 20:48:16.334795 containerd[1467]: time="2025-01-13T20:48:16.334066988Z" level=info msg="StopPodSandbox for \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\"" Jan 13 20:48:16.334900 containerd[1467]: time="2025-01-13T20:48:16.334875133Z" level=info msg="TearDown network for sandbox \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\" successfully" Jan 13 20:48:16.334900 containerd[1467]: time="2025-01-13T20:48:16.334896165Z" level=info msg="StopPodSandbox for \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\" returns successfully" Jan 13 20:48:16.336266 containerd[1467]: time="2025-01-13T20:48:16.336227805Z" level=info msg="StopPodSandbox for \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\"" Jan 13 20:48:16.336312 containerd[1467]: time="2025-01-13T20:48:16.336298537Z" level=info msg="TearDown network for sandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\" successfully" Jan 13 20:48:16.336348 containerd[1467]: time="2025-01-13T20:48:16.336312708Z" level=info msg="StopPodSandbox for \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\" returns successfully" Jan 13 20:48:16.337408 containerd[1467]: time="2025-01-13T20:48:16.337347801Z" level=info msg="StopPodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\"" Jan 13 20:48:16.337984 containerd[1467]: time="2025-01-13T20:48:16.337786202Z" level=info msg="TearDown network for sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" successfully" Jan 13 20:48:16.337984 containerd[1467]: time="2025-01-13T20:48:16.337806151Z" level=info msg="StopPodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" returns successfully" Jan 13 20:48:16.338218 containerd[1467]: time="2025-01-13T20:48:16.338195254Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\"" Jan 13 20:48:16.338578 
containerd[1467]: time="2025-01-13T20:48:16.338367779Z" level=info msg="TearDown network for sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" successfully" Jan 13 20:48:16.338733 containerd[1467]: time="2025-01-13T20:48:16.338648618Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" returns successfully" Jan 13 20:48:16.339134 containerd[1467]: time="2025-01-13T20:48:16.338937842Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\"" Jan 13 20:48:16.339409 containerd[1467]: time="2025-01-13T20:48:16.339297547Z" level=info msg="TearDown network for sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" successfully" Jan 13 20:48:16.339477 containerd[1467]: time="2025-01-13T20:48:16.339462079Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" returns successfully" Jan 13 20:48:16.340006 containerd[1467]: time="2025-01-13T20:48:16.339981752Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\"" Jan 13 20:48:16.340068 containerd[1467]: time="2025-01-13T20:48:16.340054008Z" level=info msg="TearDown network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" successfully" Jan 13 20:48:16.340105 containerd[1467]: time="2025-01-13T20:48:16.340067669Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" returns successfully" Jan 13 20:48:16.340479 containerd[1467]: time="2025-01-13T20:48:16.340459730Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\"" Jan 13 20:48:16.342058 containerd[1467]: time="2025-01-13T20:48:16.342040363Z" level=info msg="TearDown network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" successfully" Jan 13 20:48:16.342230 
containerd[1467]: time="2025-01-13T20:48:16.342118295Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" returns successfully" Jan 13 20:48:16.343988 containerd[1467]: time="2025-01-13T20:48:16.343216156Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\"" Jan 13 20:48:16.344130 containerd[1467]: time="2025-01-13T20:48:16.344111301Z" level=info msg="TearDown network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" successfully" Jan 13 20:48:16.344255 containerd[1467]: time="2025-01-13T20:48:16.344178983Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" returns successfully" Jan 13 20:48:16.347578 containerd[1467]: time="2025-01-13T20:48:16.345638976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:9,}" Jan 13 20:48:16.440002 containerd[1467]: time="2025-01-13T20:48:16.439930290Z" level=error msg="Failed to destroy network for sandbox \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:16.440480 containerd[1467]: time="2025-01-13T20:48:16.440440706Z" level=error msg="encountered an error cleaning up failed sandbox \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:16.440608 containerd[1467]: time="2025-01-13T20:48:16.440584906Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:16.441669 kubelet[1856]: E0113 20:48:16.441279 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:16.441669 kubelet[1856]: E0113 20:48:16.441373 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b" Jan 13 20:48:16.441669 kubelet[1856]: E0113 20:48:16.441397 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b" Jan 13 20:48:16.441803 kubelet[1856]: E0113 20:48:16.441443 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-94b4b" podUID="bdb79565-9ed0-4d93-bc64-5bbf39492302" Jan 13 20:48:16.471977 containerd[1467]: time="2025-01-13T20:48:16.471768011Z" level=error msg="Failed to destroy network for sandbox \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:16.474017 containerd[1467]: time="2025-01-13T20:48:16.472092581Z" level=error msg="encountered an error cleaning up failed sandbox \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:16.474086 containerd[1467]: time="2025-01-13T20:48:16.474050051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:16.474273 kubelet[1856]: E0113 20:48:16.474234 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:16.474354 kubelet[1856]: E0113 20:48:16.474298 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:16.474354 kubelet[1856]: E0113 20:48:16.474325 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:16.474548 kubelet[1856]: E0113 20:48:16.474379 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462" Jan 13 20:48:16.965985 kubelet[1856]: E0113 20:48:16.964990 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:17.195408 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059-shm.mount: Deactivated successfully. Jan 13 20:48:17.195509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3728589795.mount: Deactivated successfully. Jan 13 20:48:17.212351 containerd[1467]: time="2025-01-13T20:48:17.212270371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:17.213577 containerd[1467]: time="2025-01-13T20:48:17.213548282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 20:48:17.214867 containerd[1467]: time="2025-01-13T20:48:17.214720101Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:17.217434 containerd[1467]: time="2025-01-13T20:48:17.217288708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:17.219027 containerd[1467]: time="2025-01-13T20:48:17.217987910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id 
\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.036666437s" Jan 13 20:48:17.219027 containerd[1467]: time="2025-01-13T20:48:17.218181031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 20:48:17.238395 containerd[1467]: time="2025-01-13T20:48:17.238333678Z" level=info msg="CreateContainer within sandbox \"49e21ca10e36054d89b9ce59b1a52a443b580a981e4cf04c3187736c9a01e35a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:48:17.262783 containerd[1467]: time="2025-01-13T20:48:17.262599634Z" level=info msg="CreateContainer within sandbox \"49e21ca10e36054d89b9ce59b1a52a443b580a981e4cf04c3187736c9a01e35a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2ea2c7b27f2f494ece519f2b01b83af09d9b769c7c702d4c7d882a8a60d53a7c\"" Jan 13 20:48:17.265618 containerd[1467]: time="2025-01-13T20:48:17.265534244Z" level=info msg="StartContainer for \"2ea2c7b27f2f494ece519f2b01b83af09d9b769c7c702d4c7d882a8a60d53a7c\"" Jan 13 20:48:17.302111 systemd[1]: Started cri-containerd-2ea2c7b27f2f494ece519f2b01b83af09d9b769c7c702d4c7d882a8a60d53a7c.scope - libcontainer container 2ea2c7b27f2f494ece519f2b01b83af09d9b769c7c702d4c7d882a8a60d53a7c. 
Jan 13 20:48:17.342000 containerd[1467]: time="2025-01-13T20:48:17.340312194Z" level=info msg="StartContainer for \"2ea2c7b27f2f494ece519f2b01b83af09d9b769c7c702d4c7d882a8a60d53a7c\" returns successfully" Jan 13 20:48:17.349995 kubelet[1856]: I0113 20:48:17.349030 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b" Jan 13 20:48:17.350335 containerd[1467]: time="2025-01-13T20:48:17.349537758Z" level=info msg="StopPodSandbox for \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\"" Jan 13 20:48:17.351150 containerd[1467]: time="2025-01-13T20:48:17.350692890Z" level=info msg="Ensure that sandbox 17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b in task-service has been cleanup successfully" Jan 13 20:48:17.351848 containerd[1467]: time="2025-01-13T20:48:17.351772696Z" level=info msg="TearDown network for sandbox \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\" successfully" Jan 13 20:48:17.351848 containerd[1467]: time="2025-01-13T20:48:17.351790908Z" level=info msg="StopPodSandbox for \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\" returns successfully" Jan 13 20:48:17.352774 containerd[1467]: time="2025-01-13T20:48:17.352568473Z" level=info msg="StopPodSandbox for \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\"" Jan 13 20:48:17.352774 containerd[1467]: time="2025-01-13T20:48:17.352688665Z" level=info msg="TearDown network for sandbox \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\" successfully" Jan 13 20:48:17.352774 containerd[1467]: time="2025-01-13T20:48:17.352730384Z" level=info msg="StopPodSandbox for \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\" returns successfully" Jan 13 20:48:17.353352 containerd[1467]: time="2025-01-13T20:48:17.353126052Z" level=info msg="StopPodSandbox for 
\"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\"" Jan 13 20:48:17.353352 containerd[1467]: time="2025-01-13T20:48:17.353201928Z" level=info msg="TearDown network for sandbox \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\" successfully" Jan 13 20:48:17.353352 containerd[1467]: time="2025-01-13T20:48:17.353214243Z" level=info msg="StopPodSandbox for \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\" returns successfully" Jan 13 20:48:17.353982 containerd[1467]: time="2025-01-13T20:48:17.353817963Z" level=info msg="StopPodSandbox for \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\"" Jan 13 20:48:17.354716 containerd[1467]: time="2025-01-13T20:48:17.354626637Z" level=info msg="TearDown network for sandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\" successfully" Jan 13 20:48:17.354716 containerd[1467]: time="2025-01-13T20:48:17.354643635Z" level=info msg="StopPodSandbox for \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\" returns successfully" Jan 13 20:48:17.355730 containerd[1467]: time="2025-01-13T20:48:17.355614893Z" level=info msg="StopPodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\"" Jan 13 20:48:17.355730 containerd[1467]: time="2025-01-13T20:48:17.355684360Z" level=info msg="TearDown network for sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" successfully" Jan 13 20:48:17.355730 containerd[1467]: time="2025-01-13T20:48:17.355695582Z" level=info msg="StopPodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" returns successfully" Jan 13 20:48:17.357029 containerd[1467]: time="2025-01-13T20:48:17.356247756Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\"" Jan 13 20:48:17.357029 containerd[1467]: time="2025-01-13T20:48:17.356324033Z" level=info msg="TearDown network for sandbox 
\"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" successfully" Jan 13 20:48:17.357029 containerd[1467]: time="2025-01-13T20:48:17.356336860Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" returns successfully" Jan 13 20:48:17.357194 kubelet[1856]: I0113 20:48:17.356559 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059" Jan 13 20:48:17.357433 containerd[1467]: time="2025-01-13T20:48:17.357413046Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\"" Jan 13 20:48:17.357636 containerd[1467]: time="2025-01-13T20:48:17.357527211Z" level=info msg="StopPodSandbox for \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\"" Jan 13 20:48:17.358028 containerd[1467]: time="2025-01-13T20:48:17.357919549Z" level=info msg="TearDown network for sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" successfully" Jan 13 20:48:17.358342 containerd[1467]: time="2025-01-13T20:48:17.357937159Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" returns successfully" Jan 13 20:48:17.358937 containerd[1467]: time="2025-01-13T20:48:17.358586278Z" level=info msg="Ensure that sandbox 2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059 in task-service has been cleanup successfully" Jan 13 20:48:17.359248 containerd[1467]: time="2025-01-13T20:48:17.359229572Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\"" Jan 13 20:48:17.359514 containerd[1467]: time="2025-01-13T20:48:17.359414188Z" level=info msg="TearDown network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" successfully" Jan 13 20:48:17.359514 containerd[1467]: time="2025-01-13T20:48:17.359429802Z" level=info 
msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" returns successfully" Jan 13 20:48:17.359900 containerd[1467]: time="2025-01-13T20:48:17.359743205Z" level=info msg="TearDown network for sandbox \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\" successfully" Jan 13 20:48:17.359900 containerd[1467]: time="2025-01-13T20:48:17.359761226Z" level=info msg="StopPodSandbox for \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\" returns successfully" Jan 13 20:48:17.360523 containerd[1467]: time="2025-01-13T20:48:17.360245586Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\"" Jan 13 20:48:17.360523 containerd[1467]: time="2025-01-13T20:48:17.360344878Z" level=info msg="TearDown network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" successfully" Jan 13 20:48:17.360523 containerd[1467]: time="2025-01-13T20:48:17.360357193Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" returns successfully" Jan 13 20:48:17.361071 containerd[1467]: time="2025-01-13T20:48:17.361052043Z" level=info msg="StopPodSandbox for \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\"" Jan 13 20:48:17.361240 containerd[1467]: time="2025-01-13T20:48:17.361179688Z" level=info msg="TearDown network for sandbox \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\" successfully" Jan 13 20:48:17.361240 containerd[1467]: time="2025-01-13T20:48:17.361195312Z" level=info msg="StopPodSandbox for \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\" returns successfully" Jan 13 20:48:17.361619 containerd[1467]: time="2025-01-13T20:48:17.361379958Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\"" Jan 13 20:48:17.361861 containerd[1467]: time="2025-01-13T20:48:17.361589154Z" level=info 
msg="StopPodSandbox for \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\"" Jan 13 20:48:17.362377 containerd[1467]: time="2025-01-13T20:48:17.361945800Z" level=info msg="TearDown network for sandbox \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\" successfully" Jan 13 20:48:17.362455 containerd[1467]: time="2025-01-13T20:48:17.361844361Z" level=info msg="TearDown network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" successfully" Jan 13 20:48:17.362613 containerd[1467]: time="2025-01-13T20:48:17.362529553Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" returns successfully" Jan 13 20:48:17.363062 containerd[1467]: time="2025-01-13T20:48:17.362706838Z" level=info msg="StopPodSandbox for \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\" returns successfully" Jan 13 20:48:17.363263 containerd[1467]: time="2025-01-13T20:48:17.363245954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:10,}" Jan 13 20:48:17.363558 containerd[1467]: time="2025-01-13T20:48:17.363540594Z" level=info msg="StopPodSandbox for \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\"" Jan 13 20:48:17.364211 containerd[1467]: time="2025-01-13T20:48:17.364126052Z" level=info msg="TearDown network for sandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\" successfully" Jan 13 20:48:17.364211 containerd[1467]: time="2025-01-13T20:48:17.364148134Z" level=info msg="StopPodSandbox for \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\" returns successfully" Jan 13 20:48:17.364837 containerd[1467]: time="2025-01-13T20:48:17.364491643Z" level=info msg="StopPodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\"" Jan 13 20:48:17.364837 containerd[1467]: 
time="2025-01-13T20:48:17.364599410Z" level=info msg="TearDown network for sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" successfully" Jan 13 20:48:17.364837 containerd[1467]: time="2025-01-13T20:48:17.364615546Z" level=info msg="StopPodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" returns successfully" Jan 13 20:48:17.365436 containerd[1467]: time="2025-01-13T20:48:17.365258428Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\"" Jan 13 20:48:17.365436 containerd[1467]: time="2025-01-13T20:48:17.365356778Z" level=info msg="TearDown network for sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" successfully" Jan 13 20:48:17.365436 containerd[1467]: time="2025-01-13T20:48:17.365389892Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" returns successfully" Jan 13 20:48:17.366141 containerd[1467]: time="2025-01-13T20:48:17.365720525Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\"" Jan 13 20:48:17.366141 containerd[1467]: time="2025-01-13T20:48:17.365789310Z" level=info msg="TearDown network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" successfully" Jan 13 20:48:17.366141 containerd[1467]: time="2025-01-13T20:48:17.365802218Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" returns successfully" Jan 13 20:48:17.367368 containerd[1467]: time="2025-01-13T20:48:17.367105701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:7,}" Jan 13 20:48:17.425838 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:48:17.425989 kernel: wireguard: Copyright (C) 2015-2019 Jason A. 
Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 13 20:48:17.464547 containerd[1467]: time="2025-01-13T20:48:17.464278686Z" level=error msg="Failed to destroy network for sandbox \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:17.465784 containerd[1467]: time="2025-01-13T20:48:17.465754792Z" level=error msg="encountered an error cleaning up failed sandbox \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:17.465882 containerd[1467]: time="2025-01-13T20:48:17.465815605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:10,} failed, error" error="failed to setup network for sandbox \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:17.466424 kubelet[1856]: E0113 20:48:17.466027 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:17.466424 kubelet[1856]: E0113 20:48:17.466081 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:17.466424 kubelet[1856]: E0113 20:48:17.466105 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl89x" Jan 13 20:48:17.466624 kubelet[1856]: E0113 20:48:17.466144 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl89x_calico-system(1d32a904-00d3-418b-b6e0-17302fc19462)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl89x" podUID="1d32a904-00d3-418b-b6e0-17302fc19462" Jan 13 20:48:17.486705 containerd[1467]: time="2025-01-13T20:48:17.486541144Z" level=error msg="Failed to destroy network for sandbox \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 13 20:48:17.487181 containerd[1467]: time="2025-01-13T20:48:17.487065227Z" level=error msg="encountered an error cleaning up failed sandbox \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:17.487181 containerd[1467]: time="2025-01-13T20:48:17.487119511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:7,} failed, error" error="failed to setup network for sandbox \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:17.487769 kubelet[1856]: E0113 20:48:17.487466 1856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:48:17.487769 kubelet[1856]: E0113 20:48:17.487521 1856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b" Jan 13 20:48:17.487769 
kubelet[1856]: E0113 20:48:17.487541 1856 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-94b4b" Jan 13 20:48:17.487988 kubelet[1856]: E0113 20:48:17.487585 1856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-94b4b_default(bdb79565-9ed0-4d93-bc64-5bbf39492302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-94b4b" podUID="bdb79565-9ed0-4d93-bc64-5bbf39492302" Jan 13 20:48:17.966209 kubelet[1856]: E0113 20:48:17.966082 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:18.202883 systemd[1]: run-netns-cni\x2d64f0352e\x2d886c\x2d0f7a\x2d0c62\x2dfd247a3d9c8a.mount: Deactivated successfully. Jan 13 20:48:18.203133 systemd[1]: run-netns-cni\x2df33f79bb\x2d7794\x2d7ff0\x2d7c25\x2d5c2968160ef8.mount: Deactivated successfully. 
Jan 13 20:48:18.371507 kubelet[1856]: I0113 20:48:18.371453 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24"
Jan 13 20:48:18.375462 containerd[1467]: time="2025-01-13T20:48:18.375398877Z" level=info msg="StopPodSandbox for \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\""
Jan 13 20:48:18.376138 containerd[1467]: time="2025-01-13T20:48:18.375833443Z" level=info msg="Ensure that sandbox c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24 in task-service has been cleanup successfully"
Jan 13 20:48:18.381092 containerd[1467]: time="2025-01-13T20:48:18.380359347Z" level=info msg="TearDown network for sandbox \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\" successfully"
Jan 13 20:48:18.381092 containerd[1467]: time="2025-01-13T20:48:18.380417234Z" level=info msg="StopPodSandbox for \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\" returns successfully"
Jan 13 20:48:18.381556 containerd[1467]: time="2025-01-13T20:48:18.381504621Z" level=info msg="StopPodSandbox for \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\""
Jan 13 20:48:18.381679 containerd[1467]: time="2025-01-13T20:48:18.381657197Z" level=info msg="TearDown network for sandbox \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\" successfully"
Jan 13 20:48:18.381750 containerd[1467]: time="2025-01-13T20:48:18.381686847Z" level=info msg="StopPodSandbox for \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\" returns successfully"
Jan 13 20:48:18.385610 containerd[1467]: time="2025-01-13T20:48:18.385094532Z" level=info msg="StopPodSandbox for \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\""
Jan 13 20:48:18.385610 containerd[1467]: time="2025-01-13T20:48:18.385350390Z" level=info msg="TearDown network for sandbox \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\" successfully"
Jan 13 20:48:18.385610 containerd[1467]: time="2025-01-13T20:48:18.385408829Z" level=info msg="StopPodSandbox for \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\" returns successfully"
Jan 13 20:48:18.386452 systemd[1]: run-netns-cni\x2dc8fe70db\x2dae9d\x2d5968\x2da595\x2d91151ba76c61.mount: Deactivated successfully.
Jan 13 20:48:18.389774 containerd[1467]: time="2025-01-13T20:48:18.388465556Z" level=info msg="StopPodSandbox for \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\""
Jan 13 20:48:18.389774 containerd[1467]: time="2025-01-13T20:48:18.388720893Z" level=info msg="TearDown network for sandbox \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\" successfully"
Jan 13 20:48:18.389774 containerd[1467]: time="2025-01-13T20:48:18.388769896Z" level=info msg="StopPodSandbox for \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\" returns successfully"
Jan 13 20:48:18.390372 kubelet[1856]: I0113 20:48:18.387948 1856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6"
Jan 13 20:48:18.391277 containerd[1467]: time="2025-01-13T20:48:18.390550583Z" level=info msg="StopPodSandbox for \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\""
Jan 13 20:48:18.391277 containerd[1467]: time="2025-01-13T20:48:18.391252347Z" level=info msg="Ensure that sandbox c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6 in task-service has been cleanup successfully"
Jan 13 20:48:18.394363 containerd[1467]: time="2025-01-13T20:48:18.392668951Z" level=info msg="StopPodSandbox for \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\""
Jan 13 20:48:18.394670 containerd[1467]: time="2025-01-13T20:48:18.394091862Z" level=info msg="TearDown network for sandbox \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\" successfully"
Jan 13 20:48:18.395029 containerd[1467]: time="2025-01-13T20:48:18.394825102Z" level=info msg="StopPodSandbox for \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\" returns successfully"
Jan 13 20:48:18.398048 containerd[1467]: time="2025-01-13T20:48:18.397663815Z" level=info msg="TearDown network for sandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\" successfully"
Jan 13 20:48:18.398048 containerd[1467]: time="2025-01-13T20:48:18.397856701Z" level=info msg="StopPodSandbox for \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\" returns successfully"
Jan 13 20:48:18.398924 containerd[1467]: time="2025-01-13T20:48:18.398740291Z" level=info msg="StopPodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\""
Jan 13 20:48:18.400085 containerd[1467]: time="2025-01-13T20:48:18.399592826Z" level=info msg="TearDown network for sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" successfully"
Jan 13 20:48:18.400085 containerd[1467]: time="2025-01-13T20:48:18.399665905Z" level=info msg="StopPodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" returns successfully"
Jan 13 20:48:18.399880 systemd[1]: run-netns-cni\x2d81d63b04\x2df23a\x2d933a\x2dc519\x2dd5995fd2b9a5.mount: Deactivated successfully.
Jan 13 20:48:18.400767 containerd[1467]: time="2025-01-13T20:48:18.400471974Z" level=info msg="StopPodSandbox for \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\""
Jan 13 20:48:18.402803 containerd[1467]: time="2025-01-13T20:48:18.401052166Z" level=info msg="TearDown network for sandbox \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\" successfully"
Jan 13 20:48:18.402803 containerd[1467]: time="2025-01-13T20:48:18.401487043Z" level=info msg="StopPodSandbox for \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\" returns successfully"
Jan 13 20:48:18.404333 containerd[1467]: time="2025-01-13T20:48:18.403557700Z" level=info msg="StopPodSandbox for \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\""
Jan 13 20:48:18.404333 containerd[1467]: time="2025-01-13T20:48:18.403855984Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\""
Jan 13 20:48:18.404333 containerd[1467]: time="2025-01-13T20:48:18.404227758Z" level=info msg="TearDown network for sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" successfully"
Jan 13 20:48:18.404333 containerd[1467]: time="2025-01-13T20:48:18.404267015Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" returns successfully"
Jan 13 20:48:18.404622 containerd[1467]: time="2025-01-13T20:48:18.403860587Z" level=info msg="TearDown network for sandbox \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\" successfully"
Jan 13 20:48:18.404622 containerd[1467]: time="2025-01-13T20:48:18.404381879Z" level=info msg="StopPodSandbox for \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\" returns successfully"
Jan 13 20:48:18.406897 containerd[1467]: time="2025-01-13T20:48:18.406190521Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\""
Jan 13 20:48:18.406897 containerd[1467]: time="2025-01-13T20:48:18.406395440Z" level=info msg="TearDown network for sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" successfully"
Jan 13 20:48:18.406897 containerd[1467]: time="2025-01-13T20:48:18.406426074Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" returns successfully"
Jan 13 20:48:18.406897 containerd[1467]: time="2025-01-13T20:48:18.406421320Z" level=info msg="StopPodSandbox for \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\""
Jan 13 20:48:18.407617 containerd[1467]: time="2025-01-13T20:48:18.406835701Z" level=info msg="TearDown network for sandbox \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\" successfully"
Jan 13 20:48:18.407617 containerd[1467]: time="2025-01-13T20:48:18.407369688Z" level=info msg="StopPodSandbox for \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\" returns successfully"
Jan 13 20:48:18.408019 containerd[1467]: time="2025-01-13T20:48:18.407861890Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\""
Jan 13 20:48:18.409514 containerd[1467]: time="2025-01-13T20:48:18.408525811Z" level=info msg="TearDown network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" successfully"
Jan 13 20:48:18.409911 containerd[1467]: time="2025-01-13T20:48:18.409472784Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" returns successfully"
Jan 13 20:48:18.410553 containerd[1467]: time="2025-01-13T20:48:18.408871343Z" level=info msg="StopPodSandbox for \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\""
Jan 13 20:48:18.410553 containerd[1467]: time="2025-01-13T20:48:18.410341884Z" level=info msg="TearDown network for sandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\" successfully"
Jan 13 20:48:18.410553 containerd[1467]: time="2025-01-13T20:48:18.410373991Z" level=info msg="StopPodSandbox for \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\" returns successfully"
Jan 13 20:48:18.411932 containerd[1467]: time="2025-01-13T20:48:18.411470593Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\""
Jan 13 20:48:18.411932 containerd[1467]: time="2025-01-13T20:48:18.411627541Z" level=info msg="TearDown network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" successfully"
Jan 13 20:48:18.411932 containerd[1467]: time="2025-01-13T20:48:18.411656640Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" returns successfully"
Jan 13 20:48:18.412863 containerd[1467]: time="2025-01-13T20:48:18.412786883Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\""
Jan 13 20:48:18.413559 containerd[1467]: time="2025-01-13T20:48:18.413380202Z" level=info msg="TearDown network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" successfully"
Jan 13 20:48:18.413559 containerd[1467]: time="2025-01-13T20:48:18.413459417Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" returns successfully"
Jan 13 20:48:18.413559 containerd[1467]: time="2025-01-13T20:48:18.413487434Z" level=info msg="StopPodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\""
Jan 13 20:48:18.414011 containerd[1467]: time="2025-01-13T20:48:18.413663574Z" level=info msg="TearDown network for sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" successfully"
Jan 13 20:48:18.415453 containerd[1467]: time="2025-01-13T20:48:18.413690237Z" level=info msg="StopPodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" returns successfully"
Jan 13 20:48:18.416356 containerd[1467]: time="2025-01-13T20:48:18.414692139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:11,}"
Jan 13 20:48:18.419353 containerd[1467]: time="2025-01-13T20:48:18.419280795Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\""
Jan 13 20:48:18.420218 containerd[1467]: time="2025-01-13T20:48:18.420062968Z" level=info msg="TearDown network for sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" successfully"
Jan 13 20:48:18.420218 containerd[1467]: time="2025-01-13T20:48:18.420158038Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" returns successfully"
Jan 13 20:48:18.423400 containerd[1467]: time="2025-01-13T20:48:18.422521605Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\""
Jan 13 20:48:18.423400 containerd[1467]: time="2025-01-13T20:48:18.423242140Z" level=info msg="TearDown network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" successfully"
Jan 13 20:48:18.423400 containerd[1467]: time="2025-01-13T20:48:18.423276243Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" returns successfully"
Jan 13 20:48:18.424879 containerd[1467]: time="2025-01-13T20:48:18.424806406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:8,}"
Jan 13 20:48:18.890034 systemd-networkd[1353]: cali119f2e4c7a1: Link UP
Jan 13 20:48:18.890234 systemd-networkd[1353]: cali119f2e4c7a1: Gained carrier
Jan 13 20:48:18.909232 kubelet[1856]: I0113 20:48:18.907429 1856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-72h9d" podStartSLOduration=5.123939383 podStartE2EDuration="27.907406927s" podCreationTimestamp="2025-01-13 20:47:51 +0000 UTC" firstStartedPulling="2025-01-13 20:47:54.436608927 +0000 UTC m=+4.711233230" lastFinishedPulling="2025-01-13 20:48:17.220076422 +0000 UTC m=+27.494700774" observedRunningTime="2025-01-13 20:48:18.527450488 +0000 UTC m=+28.802074840" watchObservedRunningTime="2025-01-13 20:48:18.907406927 +0000 UTC m=+29.182031230"
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.677 [INFO][3019] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.702 [INFO][3019] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.85-k8s-nginx--deployment--8587fbcb89--94b4b-eth0 nginx-deployment-8587fbcb89- default bdb79565-9ed0-4d93-bc64-5bbf39492302 1176 0 2025-01-13 20:48:10 +0000 UTC <nil> <nil> map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.85 nginx-deployment-8587fbcb89-94b4b eth0 default [] [] [kns.default ksa.default.default] cali119f2e4c7a1 [] []}} ContainerID="0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" Namespace="default" Pod="nginx-deployment-8587fbcb89-94b4b" WorkloadEndpoint="172.24.4.85-k8s-nginx--deployment--8587fbcb89--94b4b-"
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.702 [INFO][3019] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" Namespace="default" Pod="nginx-deployment-8587fbcb89-94b4b" WorkloadEndpoint="172.24.4.85-k8s-nginx--deployment--8587fbcb89--94b4b-eth0"
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.740 [INFO][3033] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" HandleID="k8s-pod-network.0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" Workload="172.24.4.85-k8s-nginx--deployment--8587fbcb89--94b4b-eth0"
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.769 [INFO][3033] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" HandleID="k8s-pod-network.0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" Workload="172.24.4.85-k8s-nginx--deployment--8587fbcb89--94b4b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319070), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.85", "pod":"nginx-deployment-8587fbcb89-94b4b", "timestamp":"2025-01-13 20:48:18.740920686 +0000 UTC"}, Hostname:"172.24.4.85", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.769 [INFO][3033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.769 [INFO][3033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.769 [INFO][3033] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.85'
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.779 [INFO][3033] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" host="172.24.4.85"
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.788 [INFO][3033] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.85"
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.801 [INFO][3033] ipam/ipam.go 489: Trying affinity for 192.168.10.0/26 host="172.24.4.85"
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.808 [INFO][3033] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.0/26 host="172.24.4.85"
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.814 [INFO][3033] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="172.24.4.85"
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.814 [INFO][3033] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" host="172.24.4.85"
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.817 [INFO][3033] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.828 [INFO][3033] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" host="172.24.4.85"
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.856 [INFO][3033] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.1/26] block=192.168.10.0/26 handle="k8s-pod-network.0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" host="172.24.4.85"
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.856 [INFO][3033] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.1/26] handle="k8s-pod-network.0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" host="172.24.4.85"
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.857 [INFO][3033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:48:18.909385 containerd[1467]: 2025-01-13 20:48:18.858 [INFO][3033] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.1/26] IPv6=[] ContainerID="0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" HandleID="k8s-pod-network.0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" Workload="172.24.4.85-k8s-nginx--deployment--8587fbcb89--94b4b-eth0"
Jan 13 20:48:18.909979 containerd[1467]: 2025-01-13 20:48:18.865 [INFO][3019] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" Namespace="default" Pod="nginx-deployment-8587fbcb89-94b4b" WorkloadEndpoint="172.24.4.85-k8s-nginx--deployment--8587fbcb89--94b4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.85-k8s-nginx--deployment--8587fbcb89--94b4b-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"bdb79565-9ed0-4d93-bc64-5bbf39492302", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 48, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.85", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-94b4b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali119f2e4c7a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:48:18.909979 containerd[1467]: 2025-01-13 20:48:18.865 [INFO][3019] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.1/32] ContainerID="0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" Namespace="default" Pod="nginx-deployment-8587fbcb89-94b4b" WorkloadEndpoint="172.24.4.85-k8s-nginx--deployment--8587fbcb89--94b4b-eth0"
Jan 13 20:48:18.909979 containerd[1467]: 2025-01-13 20:48:18.865 [INFO][3019] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali119f2e4c7a1 ContainerID="0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" Namespace="default" Pod="nginx-deployment-8587fbcb89-94b4b" WorkloadEndpoint="172.24.4.85-k8s-nginx--deployment--8587fbcb89--94b4b-eth0"
Jan 13 20:48:18.909979 containerd[1467]: 2025-01-13 20:48:18.888 [INFO][3019] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" Namespace="default" Pod="nginx-deployment-8587fbcb89-94b4b" WorkloadEndpoint="172.24.4.85-k8s-nginx--deployment--8587fbcb89--94b4b-eth0"
Jan 13 20:48:18.909979 containerd[1467]: 2025-01-13 20:48:18.888 [INFO][3019] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" Namespace="default" Pod="nginx-deployment-8587fbcb89-94b4b" WorkloadEndpoint="172.24.4.85-k8s-nginx--deployment--8587fbcb89--94b4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.85-k8s-nginx--deployment--8587fbcb89--94b4b-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"bdb79565-9ed0-4d93-bc64-5bbf39492302", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 48, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.85", ContainerID:"0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e", Pod:"nginx-deployment-8587fbcb89-94b4b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali119f2e4c7a1", MAC:"e2:b3:aa:5e:8a:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:48:18.909979 containerd[1467]: 2025-01-13 20:48:18.907 [INFO][3019] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e" Namespace="default" Pod="nginx-deployment-8587fbcb89-94b4b" WorkloadEndpoint="172.24.4.85-k8s-nginx--deployment--8587fbcb89--94b4b-eth0"
Jan 13 20:48:18.959054 systemd-networkd[1353]: calieacdd5ebb4e: Link UP
Jan 13 20:48:18.960273 systemd-networkd[1353]: calieacdd5ebb4e: Gained carrier
Jan 13 20:48:18.968260 kubelet[1856]: E0113 20:48:18.968220 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:48:18.997152 containerd[1467]: time="2025-01-13T20:48:18.996832736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:48:18.997152 containerd[1467]: time="2025-01-13T20:48:18.996923032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:48:18.997152 containerd[1467]: time="2025-01-13T20:48:18.996942065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:48:18.998696 containerd[1467]: time="2025-01-13T20:48:18.998624011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.668 [INFO][3009] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.704 [INFO][3009] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.85-k8s-csi--node--driver--jl89x-eth0 csi-node-driver- calico-system 1d32a904-00d3-418b-b6e0-17302fc19462 1078 0 2025-01-13 20:47:51 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.24.4.85 csi-node-driver-jl89x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calieacdd5ebb4e [] []}} ContainerID="b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" Namespace="calico-system" Pod="csi-node-driver-jl89x" WorkloadEndpoint="172.24.4.85-k8s-csi--node--driver--jl89x-"
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.704 [INFO][3009] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" Namespace="calico-system" Pod="csi-node-driver-jl89x" WorkloadEndpoint="172.24.4.85-k8s-csi--node--driver--jl89x-eth0"
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.769 [INFO][3037] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" HandleID="k8s-pod-network.b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" Workload="172.24.4.85-k8s-csi--node--driver--jl89x-eth0"
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.808 [INFO][3037] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" HandleID="k8s-pod-network.b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" Workload="172.24.4.85-k8s-csi--node--driver--jl89x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031f0c0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.85", "pod":"csi-node-driver-jl89x", "timestamp":"2025-01-13 20:48:18.769555969 +0000 UTC"}, Hostname:"172.24.4.85", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.809 [INFO][3037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.857 [INFO][3037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.857 [INFO][3037] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.85'
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.878 [INFO][3037] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" host="172.24.4.85"
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.888 [INFO][3037] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.85"
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.910 [INFO][3037] ipam/ipam.go 489: Trying affinity for 192.168.10.0/26 host="172.24.4.85"
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.916 [INFO][3037] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.0/26 host="172.24.4.85"
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.919 [INFO][3037] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="172.24.4.85"
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.919 [INFO][3037] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" host="172.24.4.85"
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.922 [INFO][3037] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.929 [INFO][3037] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" host="172.24.4.85"
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.949 [INFO][3037] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.2/26] block=192.168.10.0/26 handle="k8s-pod-network.b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" host="172.24.4.85"
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.949 [INFO][3037] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.2/26] handle="k8s-pod-network.b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" host="172.24.4.85"
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.949 [INFO][3037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:48:19.007077 containerd[1467]: 2025-01-13 20:48:18.949 [INFO][3037] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.2/26] IPv6=[] ContainerID="b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" HandleID="k8s-pod-network.b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" Workload="172.24.4.85-k8s-csi--node--driver--jl89x-eth0"
Jan 13 20:48:19.007951 containerd[1467]: 2025-01-13 20:48:18.951 [INFO][3009] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" Namespace="calico-system" Pod="csi-node-driver-jl89x" WorkloadEndpoint="172.24.4.85-k8s-csi--node--driver--jl89x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.85-k8s-csi--node--driver--jl89x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1d32a904-00d3-418b-b6e0-17302fc19462", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 47, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.85", ContainerID:"", Pod:"csi-node-driver-jl89x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieacdd5ebb4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:48:19.007951 containerd[1467]: 2025-01-13 20:48:18.951 [INFO][3009] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.2/32] ContainerID="b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" Namespace="calico-system" Pod="csi-node-driver-jl89x" WorkloadEndpoint="172.24.4.85-k8s-csi--node--driver--jl89x-eth0"
Jan 13 20:48:19.007951 containerd[1467]: 2025-01-13 20:48:18.952 [INFO][3009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieacdd5ebb4e ContainerID="b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" Namespace="calico-system" Pod="csi-node-driver-jl89x" WorkloadEndpoint="172.24.4.85-k8s-csi--node--driver--jl89x-eth0"
Jan 13 20:48:19.007951 containerd[1467]: 2025-01-13 20:48:18.960 [INFO][3009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" Namespace="calico-system" Pod="csi-node-driver-jl89x" WorkloadEndpoint="172.24.4.85-k8s-csi--node--driver--jl89x-eth0"
Jan 13 20:48:19.007951 containerd[1467]: 2025-01-13 20:48:18.961 [INFO][3009] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" Namespace="calico-system" Pod="csi-node-driver-jl89x" WorkloadEndpoint="172.24.4.85-k8s-csi--node--driver--jl89x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.85-k8s-csi--node--driver--jl89x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1d32a904-00d3-418b-b6e0-17302fc19462", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 47, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.85", ContainerID:"b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77", Pod:"csi-node-driver-jl89x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieacdd5ebb4e", MAC:"a2:fc:ae:8d:89:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:48:19.007951 containerd[1467]: 2025-01-13 20:48:19.003 [INFO][3009] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77" Namespace="calico-system" Pod="csi-node-driver-jl89x" WorkloadEndpoint="172.24.4.85-k8s-csi--node--driver--jl89x-eth0"
Jan 13 20:48:19.037438 systemd[1]: Started cri-containerd-0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e.scope - libcontainer container 0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e.
Jan 13 20:48:19.057324 containerd[1467]: time="2025-01-13T20:48:19.056574312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:48:19.057324 containerd[1467]: time="2025-01-13T20:48:19.056641168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:48:19.057324 containerd[1467]: time="2025-01-13T20:48:19.056660628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:48:19.057324 containerd[1467]: time="2025-01-13T20:48:19.056739745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:48:19.089127 systemd[1]: Started cri-containerd-b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77.scope - libcontainer container b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77.
Jan 13 20:48:19.120033 containerd[1467]: time="2025-01-13T20:48:19.118861191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-94b4b,Uid:bdb79565-9ed0-4d93-bc64-5bbf39492302,Namespace:default,Attempt:8,} returns sandbox id \"0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e\"" Jan 13 20:48:19.124029 containerd[1467]: time="2025-01-13T20:48:19.123938729Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:48:19.152084 containerd[1467]: time="2025-01-13T20:48:19.151672191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl89x,Uid:1d32a904-00d3-418b-b6e0-17302fc19462,Namespace:calico-system,Attempt:11,} returns sandbox id \"b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77\"" Jan 13 20:48:19.249209 kernel: bpftool[3263]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:48:19.566481 systemd-networkd[1353]: vxlan.calico: Link UP Jan 13 20:48:19.566489 systemd-networkd[1353]: vxlan.calico: Gained carrier Jan 13 20:48:19.969323 kubelet[1856]: E0113 20:48:19.969186 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:20.273429 systemd-networkd[1353]: cali119f2e4c7a1: Gained IPv6LL Jan 13 20:48:20.529866 systemd-networkd[1353]: calieacdd5ebb4e: Gained IPv6LL Jan 13 20:48:20.970062 kubelet[1856]: E0113 20:48:20.969998 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:21.299247 systemd-networkd[1353]: vxlan.calico: Gained IPv6LL Jan 13 20:48:21.970576 kubelet[1856]: E0113 20:48:21.970499 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:22.971087 kubelet[1856]: E0113 20:48:22.970917 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 13 20:48:23.971269 kubelet[1856]: E0113 20:48:23.971233 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:24.216641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1763574211.mount: Deactivated successfully. Jan 13 20:48:24.972391 kubelet[1856]: E0113 20:48:24.972133 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:25.592616 containerd[1467]: time="2025-01-13T20:48:25.592289683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:25.594503 containerd[1467]: time="2025-01-13T20:48:25.593594483Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 13 20:48:25.596460 containerd[1467]: time="2025-01-13T20:48:25.596401260Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:25.600631 containerd[1467]: time="2025-01-13T20:48:25.600558739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:25.603929 containerd[1467]: time="2025-01-13T20:48:25.603393855Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 6.479347858s" Jan 13 20:48:25.603929 containerd[1467]: time="2025-01-13T20:48:25.603487105Z" level=info msg="PullImage 
\"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 20:48:25.605344 containerd[1467]: time="2025-01-13T20:48:25.605317552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:48:25.606011 containerd[1467]: time="2025-01-13T20:48:25.605880481Z" level=info msg="CreateContainer within sandbox \"0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 20:48:25.631700 containerd[1467]: time="2025-01-13T20:48:25.631592399Z" level=info msg="CreateContainer within sandbox \"0bf0dce649f07552bdd687455c17ba896f99deb21930b7e71c3032bf519ba38e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"edf7fd7dedd8c6262d56fb83857b9563ccc16a9524a2a01dab851bdb91a8376f\"" Jan 13 20:48:25.632856 containerd[1467]: time="2025-01-13T20:48:25.632778065Z" level=info msg="StartContainer for \"edf7fd7dedd8c6262d56fb83857b9563ccc16a9524a2a01dab851bdb91a8376f\"" Jan 13 20:48:25.679685 systemd[1]: run-containerd-runc-k8s.io-edf7fd7dedd8c6262d56fb83857b9563ccc16a9524a2a01dab851bdb91a8376f-runc.O7rAQs.mount: Deactivated successfully. Jan 13 20:48:25.687136 systemd[1]: Started cri-containerd-edf7fd7dedd8c6262d56fb83857b9563ccc16a9524a2a01dab851bdb91a8376f.scope - libcontainer container edf7fd7dedd8c6262d56fb83857b9563ccc16a9524a2a01dab851bdb91a8376f. 
Jan 13 20:48:25.717056 containerd[1467]: time="2025-01-13T20:48:25.716998282Z" level=info msg="StartContainer for \"edf7fd7dedd8c6262d56fb83857b9563ccc16a9524a2a01dab851bdb91a8376f\" returns successfully" Jan 13 20:48:25.973280 kubelet[1856]: E0113 20:48:25.973079 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:26.973555 kubelet[1856]: E0113 20:48:26.973455 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:27.668948 containerd[1467]: time="2025-01-13T20:48:27.668760638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:27.670150 containerd[1467]: time="2025-01-13T20:48:27.670102155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 20:48:27.671792 containerd[1467]: time="2025-01-13T20:48:27.671716498Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:27.674604 containerd[1467]: time="2025-01-13T20:48:27.674565231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:27.675576 containerd[1467]: time="2025-01-13T20:48:27.675268282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.069300156s" Jan 13 20:48:27.675576 containerd[1467]: 
time="2025-01-13T20:48:27.675303918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 20:48:27.679025 containerd[1467]: time="2025-01-13T20:48:27.677849761Z" level=info msg="CreateContainer within sandbox \"b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:48:27.715430 containerd[1467]: time="2025-01-13T20:48:27.715394813Z" level=info msg="CreateContainer within sandbox \"b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5283663249e00b3ab917a596d785b4ea25d09ad9a422eeb095cb725c4a3c1e5b\"" Jan 13 20:48:27.716262 containerd[1467]: time="2025-01-13T20:48:27.716196812Z" level=info msg="StartContainer for \"5283663249e00b3ab917a596d785b4ea25d09ad9a422eeb095cb725c4a3c1e5b\"" Jan 13 20:48:27.759132 systemd[1]: Started cri-containerd-5283663249e00b3ab917a596d785b4ea25d09ad9a422eeb095cb725c4a3c1e5b.scope - libcontainer container 5283663249e00b3ab917a596d785b4ea25d09ad9a422eeb095cb725c4a3c1e5b. 
Jan 13 20:48:27.803276 containerd[1467]: time="2025-01-13T20:48:27.803126304Z" level=info msg="StartContainer for \"5283663249e00b3ab917a596d785b4ea25d09ad9a422eeb095cb725c4a3c1e5b\" returns successfully" Jan 13 20:48:27.804089 containerd[1467]: time="2025-01-13T20:48:27.804071237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:48:27.974708 kubelet[1856]: E0113 20:48:27.974466 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:28.975184 kubelet[1856]: E0113 20:48:28.975083 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:29.930601 containerd[1467]: time="2025-01-13T20:48:29.930535870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:29.931987 containerd[1467]: time="2025-01-13T20:48:29.931799450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 20:48:29.933474 containerd[1467]: time="2025-01-13T20:48:29.933435055Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:29.936990 containerd[1467]: time="2025-01-13T20:48:29.936579597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:29.937712 containerd[1467]: time="2025-01-13T20:48:29.937224703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.132995557s" Jan 13 20:48:29.937712 containerd[1467]: time="2025-01-13T20:48:29.937255045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 20:48:29.939273 containerd[1467]: time="2025-01-13T20:48:29.939093998Z" level=info msg="CreateContainer within sandbox \"b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:48:29.962019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193680718.mount: Deactivated successfully. Jan 13 20:48:29.966366 containerd[1467]: time="2025-01-13T20:48:29.966319788Z" level=info msg="CreateContainer within sandbox \"b5f171c2a2571446b59727a0048b8d1c41a28e06a0ef00fc4e5493fe2f4fef77\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c5be49de6549033ad409722bf25f4e7371d8295f2373a216583481ee53fd3fb1\"" Jan 13 20:48:29.966935 containerd[1467]: time="2025-01-13T20:48:29.966899341Z" level=info msg="StartContainer for \"c5be49de6549033ad409722bf25f4e7371d8295f2373a216583481ee53fd3fb1\"" Jan 13 20:48:29.975234 kubelet[1856]: E0113 20:48:29.975213 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:30.002134 systemd[1]: Started cri-containerd-c5be49de6549033ad409722bf25f4e7371d8295f2373a216583481ee53fd3fb1.scope - libcontainer container c5be49de6549033ad409722bf25f4e7371d8295f2373a216583481ee53fd3fb1. 
Jan 13 20:48:30.046874 containerd[1467]: time="2025-01-13T20:48:30.046826299Z" level=info msg="StartContainer for \"c5be49de6549033ad409722bf25f4e7371d8295f2373a216583481ee53fd3fb1\" returns successfully" Jan 13 20:48:30.079248 kubelet[1856]: I0113 20:48:30.079204 1856 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:48:30.079434 kubelet[1856]: I0113 20:48:30.079368 1856 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:48:30.587077 kubelet[1856]: I0113 20:48:30.586834 1856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jl89x" podStartSLOduration=28.804060213 podStartE2EDuration="39.586802436s" podCreationTimestamp="2025-01-13 20:47:51 +0000 UTC" firstStartedPulling="2025-01-13 20:48:19.155235732 +0000 UTC m=+29.429860044" lastFinishedPulling="2025-01-13 20:48:29.937977964 +0000 UTC m=+40.212602267" observedRunningTime="2025-01-13 20:48:30.586504151 +0000 UTC m=+40.861128504" watchObservedRunningTime="2025-01-13 20:48:30.586802436 +0000 UTC m=+40.861426788" Jan 13 20:48:30.587481 kubelet[1856]: I0113 20:48:30.587184 1856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-94b4b" podStartSLOduration=14.105653088 podStartE2EDuration="20.587168336s" podCreationTimestamp="2025-01-13 20:48:10 +0000 UTC" firstStartedPulling="2025-01-13 20:48:19.123104914 +0000 UTC m=+29.397729216" lastFinishedPulling="2025-01-13 20:48:25.604620152 +0000 UTC m=+35.879244464" observedRunningTime="2025-01-13 20:48:26.553199015 +0000 UTC m=+36.827823367" watchObservedRunningTime="2025-01-13 20:48:30.587168336 +0000 UTC m=+40.861792689" Jan 13 20:48:30.939032 kubelet[1856]: E0113 20:48:30.938256 1856 file.go:104] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:30.976616 kubelet[1856]: E0113 20:48:30.976548 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:31.977355 kubelet[1856]: E0113 20:48:31.977249 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:32.977990 kubelet[1856]: E0113 20:48:32.977896 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:33.978641 kubelet[1856]: E0113 20:48:33.978568 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:34.979785 kubelet[1856]: E0113 20:48:34.979682 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:35.980289 kubelet[1856]: E0113 20:48:35.980208 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:36.981002 kubelet[1856]: E0113 20:48:36.980813 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:37.981193 kubelet[1856]: E0113 20:48:37.981101 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:38.982320 kubelet[1856]: E0113 20:48:38.982151 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:39.787448 systemd[1]: Created slice kubepods-besteffort-pod06739840_d31a_4a6a_a952_a571ce58cafb.slice - libcontainer container kubepods-besteffort-pod06739840_d31a_4a6a_a952_a571ce58cafb.slice. 
Jan 13 20:48:39.826359 kubelet[1856]: I0113 20:48:39.826183 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/06739840-d31a-4a6a-a952-a571ce58cafb-data\") pod \"nfs-server-provisioner-0\" (UID: \"06739840-d31a-4a6a-a952-a571ce58cafb\") " pod="default/nfs-server-provisioner-0" Jan 13 20:48:39.826359 kubelet[1856]: I0113 20:48:39.826286 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktjpr\" (UniqueName: \"kubernetes.io/projected/06739840-d31a-4a6a-a952-a571ce58cafb-kube-api-access-ktjpr\") pod \"nfs-server-provisioner-0\" (UID: \"06739840-d31a-4a6a-a952-a571ce58cafb\") " pod="default/nfs-server-provisioner-0" Jan 13 20:48:39.983167 kubelet[1856]: E0113 20:48:39.982732 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:40.094590 containerd[1467]: time="2025-01-13T20:48:40.094474193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:06739840-d31a-4a6a-a952-a571ce58cafb,Namespace:default,Attempt:0,}" Jan 13 20:48:40.325362 systemd-networkd[1353]: cali60e51b789ff: Link UP Jan 13 20:48:40.328214 systemd-networkd[1353]: cali60e51b789ff: Gained carrier Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.196 [INFO][3578] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.85-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 06739840-d31a-4a6a-a952-a571ce58cafb 1325 0 2025-01-13 20:48:39 +0000 UTC <nil> <nil> map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner 
release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.24.4.85 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.85-k8s-nfs--server--provisioner--0-" Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.197 [INFO][3578] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.85-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.253 [INFO][3588] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" HandleID="k8s-pod-network.3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" Workload="172.24.4.85-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.270 [INFO][3588] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" HandleID="k8s-pod-network.3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" Workload="172.24.4.85-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a16e0), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.85", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-13 
20:48:40.253878061 +0000 UTC"}, Hostname:"172.24.4.85", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.270 [INFO][3588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.271 [INFO][3588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.271 [INFO][3588] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.85' Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.275 [INFO][3588] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" host="172.24.4.85" Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.284 [INFO][3588] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.85" Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.291 [INFO][3588] ipam/ipam.go 489: Trying affinity for 192.168.10.0/26 host="172.24.4.85" Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.294 [INFO][3588] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.0/26 host="172.24.4.85" Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.298 [INFO][3588] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="172.24.4.85" Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.298 [INFO][3588] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" host="172.24.4.85" Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.300 [INFO][3588] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489 Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.305 [INFO][3588] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" host="172.24.4.85" Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.316 [INFO][3588] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.3/26] block=192.168.10.0/26 handle="k8s-pod-network.3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" host="172.24.4.85" Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.317 [INFO][3588] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.3/26] handle="k8s-pod-network.3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" host="172.24.4.85" Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.317 [INFO][3588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:48:40.350053 containerd[1467]: 2025-01-13 20:48:40.317 [INFO][3588] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.3/26] IPv6=[] ContainerID="3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" HandleID="k8s-pod-network.3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" Workload="172.24.4.85-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:48:40.351502 containerd[1467]: 2025-01-13 20:48:40.320 [INFO][3578] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.85-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.85-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"06739840-d31a-4a6a-a952-a571ce58cafb", ResourceVersion:"1325", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 48, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.85", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.10.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:48:40.351502 containerd[1467]: 2025-01-13 20:48:40.320 [INFO][3578] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.3/32] ContainerID="3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.85-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:48:40.351502 containerd[1467]: 2025-01-13 20:48:40.320 [INFO][3578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.85-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:48:40.351502 containerd[1467]: 2025-01-13 20:48:40.328 [INFO][3578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.85-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:48:40.351914 containerd[1467]: 2025-01-13 20:48:40.328 [INFO][3578] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.85-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.85-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"06739840-d31a-4a6a-a952-a571ce58cafb", ResourceVersion:"1325", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 48, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.85", ContainerID:"3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.10.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"f2:fc:ab:24:50:61", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:48:40.351914 containerd[1467]: 2025-01-13 20:48:40.341 [INFO][3578] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.85-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:48:40.403396 containerd[1467]: time="2025-01-13T20:48:40.403242755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:48:40.403396 containerd[1467]: time="2025-01-13T20:48:40.403317396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:48:40.403396 containerd[1467]: time="2025-01-13T20:48:40.403332358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:48:40.403931 containerd[1467]: time="2025-01-13T20:48:40.403416960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:48:40.429127 systemd[1]: Started cri-containerd-3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489.scope - libcontainer container 3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489. Jan 13 20:48:40.468805 containerd[1467]: time="2025-01-13T20:48:40.468735540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:06739840-d31a-4a6a-a952-a571ce58cafb,Namespace:default,Attempt:0,} returns sandbox id \"3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489\"" Jan 13 20:48:40.470567 containerd[1467]: time="2025-01-13T20:48:40.470541341Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 20:48:40.983917 kubelet[1856]: E0113 20:48:40.983819 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:41.984343 kubelet[1856]: E0113 20:48:41.984305 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:42.289417 systemd-networkd[1353]: cali60e51b789ff: Gained IPv6LL Jan 13 20:48:42.985366 kubelet[1856]: E0113 20:48:42.985295 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:43.729683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3279545647.mount: Deactivated successfully. 
Jan 13 20:48:43.985541 kubelet[1856]: E0113 20:48:43.985428 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:44.986203 kubelet[1856]: E0113 20:48:44.986160 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:45.913697 containerd[1467]: time="2025-01-13T20:48:45.913540495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:45.914829 containerd[1467]: time="2025-01-13T20:48:45.914775593Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 13 20:48:45.916241 containerd[1467]: time="2025-01-13T20:48:45.916194225Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:45.920045 containerd[1467]: time="2025-01-13T20:48:45.920004728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:48:45.922289 containerd[1467]: time="2025-01-13T20:48:45.922199355Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.451595329s" Jan 13 20:48:45.922289 containerd[1467]: time="2025-01-13T20:48:45.922228236Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 13 20:48:45.924122 containerd[1467]: time="2025-01-13T20:48:45.924096639Z" level=info msg="CreateContainer within sandbox \"3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 20:48:45.945250 containerd[1467]: time="2025-01-13T20:48:45.945196635Z" level=info msg="CreateContainer within sandbox \"3d8a2edf7f5e1643ae8b3c1f591c24a32473c112e43badcfc5859385bc057489\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"b1cf0628f63aca4e396a410d9f4c96c13127921cac28fdbeb51d184b6b8cfb6e\"" Jan 13 20:48:45.945776 containerd[1467]: time="2025-01-13T20:48:45.945657410Z" level=info msg="StartContainer for \"b1cf0628f63aca4e396a410d9f4c96c13127921cac28fdbeb51d184b6b8cfb6e\"" Jan 13 20:48:45.974500 systemd[1]: run-containerd-runc-k8s.io-b1cf0628f63aca4e396a410d9f4c96c13127921cac28fdbeb51d184b6b8cfb6e-runc.PzEhbv.mount: Deactivated successfully. Jan 13 20:48:45.984097 systemd[1]: Started cri-containerd-b1cf0628f63aca4e396a410d9f4c96c13127921cac28fdbeb51d184b6b8cfb6e.scope - libcontainer container b1cf0628f63aca4e396a410d9f4c96c13127921cac28fdbeb51d184b6b8cfb6e. 
Jan 13 20:48:45.987851 kubelet[1856]: E0113 20:48:45.987147 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:46.012264 containerd[1467]: time="2025-01-13T20:48:46.012150234Z" level=info msg="StartContainer for \"b1cf0628f63aca4e396a410d9f4c96c13127921cac28fdbeb51d184b6b8cfb6e\" returns successfully" Jan 13 20:48:46.690718 kubelet[1856]: I0113 20:48:46.690583 1856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.2377567369999998 podStartE2EDuration="7.690548864s" podCreationTimestamp="2025-01-13 20:48:39 +0000 UTC" firstStartedPulling="2025-01-13 20:48:40.470035644 +0000 UTC m=+50.744659946" lastFinishedPulling="2025-01-13 20:48:45.922827771 +0000 UTC m=+56.197452073" observedRunningTime="2025-01-13 20:48:46.690196579 +0000 UTC m=+56.964820941" watchObservedRunningTime="2025-01-13 20:48:46.690548864 +0000 UTC m=+56.965173216" Jan 13 20:48:46.988544 kubelet[1856]: E0113 20:48:46.988339 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:47.988629 kubelet[1856]: E0113 20:48:47.988556 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:48.989389 kubelet[1856]: E0113 20:48:48.989290 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:49.989670 kubelet[1856]: E0113 20:48:49.989544 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:50.938081 kubelet[1856]: E0113 20:48:50.937917 1856 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:50.970283 containerd[1467]: time="2025-01-13T20:48:50.970220228Z" level=info 
msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\"" Jan 13 20:48:50.971362 containerd[1467]: time="2025-01-13T20:48:50.971225915Z" level=info msg="TearDown network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" successfully" Jan 13 20:48:50.971362 containerd[1467]: time="2025-01-13T20:48:50.971337825Z" level=info msg="StopPodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" returns successfully" Jan 13 20:48:50.972353 containerd[1467]: time="2025-01-13T20:48:50.972271285Z" level=info msg="RemovePodSandbox for \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\"" Jan 13 20:48:50.972353 containerd[1467]: time="2025-01-13T20:48:50.972317549Z" level=info msg="Forcibly stopping sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\"" Jan 13 20:48:50.972505 containerd[1467]: time="2025-01-13T20:48:50.972440051Z" level=info msg="TearDown network for sandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" successfully" Jan 13 20:48:50.990058 kubelet[1856]: E0113 20:48:50.990004 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:51.033912 containerd[1467]: time="2025-01-13T20:48:51.033800198Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.033912 containerd[1467]: time="2025-01-13T20:48:51.033918379Z" level=info msg="RemovePodSandbox \"0662416eb27947d48dd80d2cbff5616e4fcd17a4d23ad1d63c883e46e9dbe54e\" returns successfully" Jan 13 20:48:51.035183 containerd[1467]: time="2025-01-13T20:48:51.034693989Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\"" Jan 13 20:48:51.035183 containerd[1467]: time="2025-01-13T20:48:51.034876162Z" level=info msg="TearDown network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" successfully" Jan 13 20:48:51.035183 containerd[1467]: time="2025-01-13T20:48:51.034903929Z" level=info msg="StopPodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" returns successfully" Jan 13 20:48:51.036021 containerd[1467]: time="2025-01-13T20:48:51.035898227Z" level=info msg="RemovePodSandbox for \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\"" Jan 13 20:48:51.036142 containerd[1467]: time="2025-01-13T20:48:51.036061252Z" level=info msg="Forcibly stopping sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\"" Jan 13 20:48:51.036385 containerd[1467]: time="2025-01-13T20:48:51.036221680Z" level=info msg="TearDown network for sandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" successfully" Jan 13 20:48:51.041108 containerd[1467]: time="2025-01-13T20:48:51.041012549Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.041108 containerd[1467]: time="2025-01-13T20:48:51.041101011Z" level=info msg="RemovePodSandbox \"3c0bf47c15cd5fd2ade19e9b8d3a2b65045194c325435106761061dc18360b14\" returns successfully" Jan 13 20:48:51.042027 containerd[1467]: time="2025-01-13T20:48:51.041690479Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\"" Jan 13 20:48:51.042027 containerd[1467]: time="2025-01-13T20:48:51.041861419Z" level=info msg="TearDown network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" successfully" Jan 13 20:48:51.042027 containerd[1467]: time="2025-01-13T20:48:51.041889667Z" level=info msg="StopPodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" returns successfully" Jan 13 20:48:51.043479 containerd[1467]: time="2025-01-13T20:48:51.042565753Z" level=info msg="RemovePodSandbox for \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\"" Jan 13 20:48:51.043479 containerd[1467]: time="2025-01-13T20:48:51.042610615Z" level=info msg="Forcibly stopping sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\"" Jan 13 20:48:51.043479 containerd[1467]: time="2025-01-13T20:48:51.042731883Z" level=info msg="TearDown network for sandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" successfully" Jan 13 20:48:51.047933 containerd[1467]: time="2025-01-13T20:48:51.047805131Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.048168 containerd[1467]: time="2025-01-13T20:48:51.047924396Z" level=info msg="RemovePodSandbox \"5521acc18242f8c16dc1d6405dafef4362049bb36a9f8d7831cad6d908102726\" returns successfully" Jan 13 20:48:51.049176 containerd[1467]: time="2025-01-13T20:48:51.048714766Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\"" Jan 13 20:48:51.049176 containerd[1467]: time="2025-01-13T20:48:51.048885356Z" level=info msg="TearDown network for sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" successfully" Jan 13 20:48:51.049176 containerd[1467]: time="2025-01-13T20:48:51.048913523Z" level=info msg="StopPodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" returns successfully" Jan 13 20:48:51.050668 containerd[1467]: time="2025-01-13T20:48:51.050121960Z" level=info msg="RemovePodSandbox for \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\"" Jan 13 20:48:51.050668 containerd[1467]: time="2025-01-13T20:48:51.050205141Z" level=info msg="Forcibly stopping sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\"" Jan 13 20:48:51.050668 containerd[1467]: time="2025-01-13T20:48:51.050482860Z" level=info msg="TearDown network for sandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" successfully" Jan 13 20:48:51.056809 containerd[1467]: time="2025-01-13T20:48:51.056709251Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.056952 containerd[1467]: time="2025-01-13T20:48:51.056816421Z" level=info msg="RemovePodSandbox \"e901a9f267b5ebd23a46cd045d132b3f9b1e75e7c531977eec6123df99a35946\" returns successfully" Jan 13 20:48:51.057943 containerd[1467]: time="2025-01-13T20:48:51.057636853Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\"" Jan 13 20:48:51.057943 containerd[1467]: time="2025-01-13T20:48:51.057801931Z" level=info msg="TearDown network for sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" successfully" Jan 13 20:48:51.057943 containerd[1467]: time="2025-01-13T20:48:51.057828816Z" level=info msg="StopPodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" returns successfully" Jan 13 20:48:51.058802 containerd[1467]: time="2025-01-13T20:48:51.058720373Z" level=info msg="RemovePodSandbox for \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\"" Jan 13 20:48:51.058802 containerd[1467]: time="2025-01-13T20:48:51.058789836Z" level=info msg="Forcibly stopping sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\"" Jan 13 20:48:51.059103 containerd[1467]: time="2025-01-13T20:48:51.059004044Z" level=info msg="TearDown network for sandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" successfully" Jan 13 20:48:51.064489 containerd[1467]: time="2025-01-13T20:48:51.063937917Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.064489 containerd[1467]: time="2025-01-13T20:48:51.064294879Z" level=info msg="RemovePodSandbox \"af5e68f2156ad5a3de47608d2af0b344713d133ef804a6a8aa4493a716786ef8\" returns successfully" Jan 13 20:48:51.067120 containerd[1467]: time="2025-01-13T20:48:51.065154110Z" level=info msg="StopPodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\"" Jan 13 20:48:51.067120 containerd[1467]: time="2025-01-13T20:48:51.065388270Z" level=info msg="TearDown network for sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" successfully" Jan 13 20:48:51.067120 containerd[1467]: time="2025-01-13T20:48:51.065420296Z" level=info msg="StopPodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" returns successfully" Jan 13 20:48:51.067120 containerd[1467]: time="2025-01-13T20:48:51.066146013Z" level=info msg="RemovePodSandbox for \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\"" Jan 13 20:48:51.067120 containerd[1467]: time="2025-01-13T20:48:51.066201507Z" level=info msg="Forcibly stopping sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\"" Jan 13 20:48:51.067120 containerd[1467]: time="2025-01-13T20:48:51.066345862Z" level=info msg="TearDown network for sandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" successfully" Jan 13 20:48:51.072563 containerd[1467]: time="2025-01-13T20:48:51.072504275Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.073466 containerd[1467]: time="2025-01-13T20:48:51.072829742Z" level=info msg="RemovePodSandbox \"d0721fd0ccd75d217ff04334739ace027764615ba402263c0cf07fc8703f2692\" returns successfully" Jan 13 20:48:51.074475 containerd[1467]: time="2025-01-13T20:48:51.073946180Z" level=info msg="StopPodSandbox for \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\"" Jan 13 20:48:51.074475 containerd[1467]: time="2025-01-13T20:48:51.074160169Z" level=info msg="TearDown network for sandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\" successfully" Jan 13 20:48:51.074475 containerd[1467]: time="2025-01-13T20:48:51.074188928Z" level=info msg="StopPodSandbox for \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\" returns successfully" Jan 13 20:48:51.078307 containerd[1467]: time="2025-01-13T20:48:51.078035421Z" level=info msg="RemovePodSandbox for \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\"" Jan 13 20:48:51.078307 containerd[1467]: time="2025-01-13T20:48:51.078127019Z" level=info msg="Forcibly stopping sandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\"" Jan 13 20:48:51.078535 containerd[1467]: time="2025-01-13T20:48:51.078357111Z" level=info msg="TearDown network for sandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\" successfully" Jan 13 20:48:51.089334 containerd[1467]: time="2025-01-13T20:48:51.088827811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.089334 containerd[1467]: time="2025-01-13T20:48:51.088925231Z" level=info msg="RemovePodSandbox \"b21b28dc1e15be95f7f801604677b085a61d0eef25d3dadc9281e7f73d4f204c\" returns successfully" Jan 13 20:48:51.089609 containerd[1467]: time="2025-01-13T20:48:51.089504177Z" level=info msg="StopPodSandbox for \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\"" Jan 13 20:48:51.090105 containerd[1467]: time="2025-01-13T20:48:51.089672573Z" level=info msg="TearDown network for sandbox \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\" successfully" Jan 13 20:48:51.090105 containerd[1467]: time="2025-01-13T20:48:51.089712474Z" level=info msg="StopPodSandbox for \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\" returns successfully" Jan 13 20:48:51.091216 containerd[1467]: time="2025-01-13T20:48:51.090799342Z" level=info msg="RemovePodSandbox for \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\"" Jan 13 20:48:51.091216 containerd[1467]: time="2025-01-13T20:48:51.090852621Z" level=info msg="Forcibly stopping sandbox \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\"" Jan 13 20:48:51.093074 containerd[1467]: time="2025-01-13T20:48:51.091516041Z" level=info msg="TearDown network for sandbox \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\" successfully" Jan 13 20:48:51.103552 containerd[1467]: time="2025-01-13T20:48:51.103453969Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.103552 containerd[1467]: time="2025-01-13T20:48:51.103539474Z" level=info msg="RemovePodSandbox \"ea2f2e3487983c519b4dc2a1fcd188e64299ccbe78e9297d5d9918e73d98cbf5\" returns successfully" Jan 13 20:48:51.104538 containerd[1467]: time="2025-01-13T20:48:51.104498299Z" level=info msg="StopPodSandbox for \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\"" Jan 13 20:48:51.105089 containerd[1467]: time="2025-01-13T20:48:51.104885503Z" level=info msg="TearDown network for sandbox \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\" successfully" Jan 13 20:48:51.105089 containerd[1467]: time="2025-01-13T20:48:51.104925745Z" level=info msg="StopPodSandbox for \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\" returns successfully" Jan 13 20:48:51.105833 containerd[1467]: time="2025-01-13T20:48:51.105682847Z" level=info msg="RemovePodSandbox for \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\"" Jan 13 20:48:51.105833 containerd[1467]: time="2025-01-13T20:48:51.105734793Z" level=info msg="Forcibly stopping sandbox \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\"" Jan 13 20:48:51.106098 containerd[1467]: time="2025-01-13T20:48:51.105857014Z" level=info msg="TearDown network for sandbox \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\" successfully" Jan 13 20:48:51.110649 containerd[1467]: time="2025-01-13T20:48:51.110575114Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.110649 containerd[1467]: time="2025-01-13T20:48:51.110662573Z" level=info msg="RemovePodSandbox \"a302df1b3bb7fa89b6b1c5f36ce42442c8b3944f07cf45ffa63c6f308dcf2dfc\" returns successfully" Jan 13 20:48:51.111865 containerd[1467]: time="2025-01-13T20:48:51.111379634Z" level=info msg="StopPodSandbox for \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\"" Jan 13 20:48:51.111865 containerd[1467]: time="2025-01-13T20:48:51.111752317Z" level=info msg="TearDown network for sandbox \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\" successfully" Jan 13 20:48:51.112197 containerd[1467]: time="2025-01-13T20:48:51.111867303Z" level=info msg="StopPodSandbox for \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\" returns successfully" Jan 13 20:48:51.112823 containerd[1467]: time="2025-01-13T20:48:51.112753619Z" level=info msg="RemovePodSandbox for \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\"" Jan 13 20:48:51.112823 containerd[1467]: time="2025-01-13T20:48:51.112808352Z" level=info msg="Forcibly stopping sandbox \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\"" Jan 13 20:48:51.113310 containerd[1467]: time="2025-01-13T20:48:51.112932896Z" level=info msg="TearDown network for sandbox \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\" successfully" Jan 13 20:48:51.119667 containerd[1467]: time="2025-01-13T20:48:51.119471618Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.119667 containerd[1467]: time="2025-01-13T20:48:51.119611334Z" level=info msg="RemovePodSandbox \"17d32ec5bc20e59d533dce7d1fbdfab09f19b2f71fc31e0e0e15085d8875d87b\" returns successfully" Jan 13 20:48:51.120353 containerd[1467]: time="2025-01-13T20:48:51.120266668Z" level=info msg="StopPodSandbox for \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\"" Jan 13 20:48:51.120461 containerd[1467]: time="2025-01-13T20:48:51.120435704Z" level=info msg="TearDown network for sandbox \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\" successfully" Jan 13 20:48:51.120529 containerd[1467]: time="2025-01-13T20:48:51.120462189Z" level=info msg="StopPodSandbox for \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\" returns successfully" Jan 13 20:48:51.121286 containerd[1467]: time="2025-01-13T20:48:51.121105086Z" level=info msg="RemovePodSandbox for \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\"" Jan 13 20:48:51.121386 containerd[1467]: time="2025-01-13T20:48:51.121323895Z" level=info msg="Forcibly stopping sandbox \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\"" Jan 13 20:48:51.123456 containerd[1467]: time="2025-01-13T20:48:51.121606424Z" level=info msg="TearDown network for sandbox \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\" successfully" Jan 13 20:48:51.129777 containerd[1467]: time="2025-01-13T20:48:51.129411051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.129777 containerd[1467]: time="2025-01-13T20:48:51.129530755Z" level=info msg="RemovePodSandbox \"c2588d377c71dd9b0a6c74a035ffc260582ec1ef695791d414a96f3a49256d24\" returns successfully" Jan 13 20:48:51.130924 containerd[1467]: time="2025-01-13T20:48:51.130848897Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\"" Jan 13 20:48:51.131610 containerd[1467]: time="2025-01-13T20:48:51.131077495Z" level=info msg="TearDown network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" successfully" Jan 13 20:48:51.131610 containerd[1467]: time="2025-01-13T20:48:51.131119080Z" level=info msg="StopPodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" returns successfully" Jan 13 20:48:51.133342 containerd[1467]: time="2025-01-13T20:48:51.132598242Z" level=info msg="RemovePodSandbox for \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\"" Jan 13 20:48:51.133342 containerd[1467]: time="2025-01-13T20:48:51.133169052Z" level=info msg="Forcibly stopping sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\"" Jan 13 20:48:51.133550 containerd[1467]: time="2025-01-13T20:48:51.133308258Z" level=info msg="TearDown network for sandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" successfully" Jan 13 20:48:51.141389 containerd[1467]: time="2025-01-13T20:48:51.141258102Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.142160 containerd[1467]: time="2025-01-13T20:48:51.141403039Z" level=info msg="RemovePodSandbox \"a8bdb50980443e3406d714cd2153dc597f421854d2579ac190eceb791f1ff885\" returns successfully" Jan 13 20:48:51.146554 containerd[1467]: time="2025-01-13T20:48:51.146348064Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\"" Jan 13 20:48:51.147452 containerd[1467]: time="2025-01-13T20:48:51.146885676Z" level=info msg="TearDown network for sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" successfully" Jan 13 20:48:51.147452 containerd[1467]: time="2025-01-13T20:48:51.146928283Z" level=info msg="StopPodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" returns successfully" Jan 13 20:48:51.147651 containerd[1467]: time="2025-01-13T20:48:51.147614349Z" level=info msg="RemovePodSandbox for \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\"" Jan 13 20:48:51.147719 containerd[1467]: time="2025-01-13T20:48:51.147662679Z" level=info msg="Forcibly stopping sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\"" Jan 13 20:48:51.148157 containerd[1467]: time="2025-01-13T20:48:51.147790610Z" level=info msg="TearDown network for sandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" successfully" Jan 13 20:48:51.153044 containerd[1467]: time="2025-01-13T20:48:51.152843878Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.153044 containerd[1467]: time="2025-01-13T20:48:51.153030270Z" level=info msg="RemovePodSandbox \"fcb7afff6adc1274495402d9d5a1484659c658ae83dd3ddd6984ef9fa2ce81df\" returns successfully" Jan 13 20:48:51.154602 containerd[1467]: time="2025-01-13T20:48:51.153935175Z" level=info msg="StopPodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\"" Jan 13 20:48:51.154602 containerd[1467]: time="2025-01-13T20:48:51.154172531Z" level=info msg="TearDown network for sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" successfully" Jan 13 20:48:51.154602 containerd[1467]: time="2025-01-13T20:48:51.154202483Z" level=info msg="StopPodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" returns successfully" Jan 13 20:48:51.155547 containerd[1467]: time="2025-01-13T20:48:51.155478798Z" level=info msg="RemovePodSandbox for \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\"" Jan 13 20:48:51.155547 containerd[1467]: time="2025-01-13T20:48:51.155540986Z" level=info msg="Forcibly stopping sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\"" Jan 13 20:48:51.155764 containerd[1467]: time="2025-01-13T20:48:51.155677476Z" level=info msg="TearDown network for sandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" successfully" Jan 13 20:48:51.160595 containerd[1467]: time="2025-01-13T20:48:51.160504429Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.160723 containerd[1467]: time="2025-01-13T20:48:51.160637672Z" level=info msg="RemovePodSandbox \"ec978831287f674398ea69f5b7c4d761a7d0a8f3756cc3241ba8d9a107cc2d3e\" returns successfully" Jan 13 20:48:51.161433 containerd[1467]: time="2025-01-13T20:48:51.161299157Z" level=info msg="StopPodSandbox for \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\"" Jan 13 20:48:51.161851 containerd[1467]: time="2025-01-13T20:48:51.161688034Z" level=info msg="TearDown network for sandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\" successfully" Jan 13 20:48:51.161851 containerd[1467]: time="2025-01-13T20:48:51.161728688Z" level=info msg="StopPodSandbox for \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\" returns successfully" Jan 13 20:48:51.162498 containerd[1467]: time="2025-01-13T20:48:51.162409463Z" level=info msg="RemovePodSandbox for \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\"" Jan 13 20:48:51.162618 containerd[1467]: time="2025-01-13T20:48:51.162496942Z" level=info msg="Forcibly stopping sandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\"" Jan 13 20:48:51.162682 containerd[1467]: time="2025-01-13T20:48:51.162622580Z" level=info msg="TearDown network for sandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\" successfully" Jan 13 20:48:51.167284 containerd[1467]: time="2025-01-13T20:48:51.167182215Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.167284 containerd[1467]: time="2025-01-13T20:48:51.167262850Z" level=info msg="RemovePodSandbox \"b111510d16a9e1ac705a154413916d3a0102d5f0ec94d473b5f85821aceab06f\" returns successfully" Jan 13 20:48:51.168687 containerd[1467]: time="2025-01-13T20:48:51.168028440Z" level=info msg="StopPodSandbox for \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\"" Jan 13 20:48:51.168687 containerd[1467]: time="2025-01-13T20:48:51.168211365Z" level=info msg="TearDown network for sandbox \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\" successfully" Jan 13 20:48:51.168687 containerd[1467]: time="2025-01-13T20:48:51.168239503Z" level=info msg="StopPodSandbox for \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\" returns successfully" Jan 13 20:48:51.169951 containerd[1467]: time="2025-01-13T20:48:51.169910257Z" level=info msg="RemovePodSandbox for \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\"" Jan 13 20:48:51.170309 containerd[1467]: time="2025-01-13T20:48:51.170141762Z" level=info msg="Forcibly stopping sandbox \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\"" Jan 13 20:48:51.170684 containerd[1467]: time="2025-01-13T20:48:51.170526380Z" level=info msg="TearDown network for sandbox \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\" successfully" Jan 13 20:48:51.177273 containerd[1467]: time="2025-01-13T20:48:51.176453347Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.177273 containerd[1467]: time="2025-01-13T20:48:51.176544744Z" level=info msg="RemovePodSandbox \"57a9acd00975b0c9d700cba1f7a3d626daa380f1fd0a6f95d2b28441df6934b3\" returns successfully" Jan 13 20:48:51.177273 containerd[1467]: time="2025-01-13T20:48:51.177228406Z" level=info msg="StopPodSandbox for \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\"" Jan 13 20:48:51.177554 containerd[1467]: time="2025-01-13T20:48:51.177419848Z" level=info msg="TearDown network for sandbox \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\" successfully" Jan 13 20:48:51.177554 containerd[1467]: time="2025-01-13T20:48:51.177537399Z" level=info msg="StopPodSandbox for \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\" returns successfully" Jan 13 20:48:51.178691 containerd[1467]: time="2025-01-13T20:48:51.178465311Z" level=info msg="RemovePodSandbox for \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\"" Jan 13 20:48:51.178691 containerd[1467]: time="2025-01-13T20:48:51.178521636Z" level=info msg="Forcibly stopping sandbox \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\"" Jan 13 20:48:51.178691 containerd[1467]: time="2025-01-13T20:48:51.178646162Z" level=info msg="TearDown network for sandbox \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\" successfully" Jan 13 20:48:51.184182 containerd[1467]: time="2025-01-13T20:48:51.184127476Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.184318 containerd[1467]: time="2025-01-13T20:48:51.184205285Z" level=info msg="RemovePodSandbox \"6c92e8873b0b04636569b9058e1435c69543001e834bc93a024494d8996009d9\" returns successfully" Jan 13 20:48:51.187398 containerd[1467]: time="2025-01-13T20:48:51.187043764Z" level=info msg="StopPodSandbox for \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\"" Jan 13 20:48:51.187398 containerd[1467]: time="2025-01-13T20:48:51.187225285Z" level=info msg="TearDown network for sandbox \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\" successfully" Jan 13 20:48:51.187398 containerd[1467]: time="2025-01-13T20:48:51.187253032Z" level=info msg="StopPodSandbox for \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\" returns successfully" Jan 13 20:48:51.189157 containerd[1467]: time="2025-01-13T20:48:51.188030697Z" level=info msg="RemovePodSandbox for \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\"" Jan 13 20:48:51.189157 containerd[1467]: time="2025-01-13T20:48:51.188080688Z" level=info msg="Forcibly stopping sandbox \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\"" Jan 13 20:48:51.189157 containerd[1467]: time="2025-01-13T20:48:51.188203971Z" level=info msg="TearDown network for sandbox \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\" successfully" Jan 13 20:48:51.216716 containerd[1467]: time="2025-01-13T20:48:51.216651497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.217096 containerd[1467]: time="2025-01-13T20:48:51.216925939Z" level=info msg="RemovePodSandbox \"2fbdba64327baa847cd080e5605172d6322a3a769b913c02c04a9591db2e4059\" returns successfully" Jan 13 20:48:51.217740 containerd[1467]: time="2025-01-13T20:48:51.217701329Z" level=info msg="StopPodSandbox for \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\"" Jan 13 20:48:51.217850 containerd[1467]: time="2025-01-13T20:48:51.217827357Z" level=info msg="TearDown network for sandbox \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\" successfully" Jan 13 20:48:51.218001 containerd[1467]: time="2025-01-13T20:48:51.217849252Z" level=info msg="StopPodSandbox for \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\" returns successfully" Jan 13 20:48:51.218250 containerd[1467]: time="2025-01-13T20:48:51.218212807Z" level=info msg="RemovePodSandbox for \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\"" Jan 13 20:48:51.218250 containerd[1467]: time="2025-01-13T20:48:51.218244663Z" level=info msg="Forcibly stopping sandbox \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\"" Jan 13 20:48:51.218380 containerd[1467]: time="2025-01-13T20:48:51.218321089Z" level=info msg="TearDown network for sandbox \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\" successfully" Jan 13 20:48:51.289304 containerd[1467]: time="2025-01-13T20:48:51.289266749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:51.289851 containerd[1467]: time="2025-01-13T20:48:51.289470537Z" level=info msg="RemovePodSandbox \"c1da7d480240167e216ebd385e1b9690d14e6da6262c872baa3fb6e2b8d30cd6\" returns successfully" Jan 13 20:48:51.991307 kubelet[1856]: E0113 20:48:51.991241 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:52.992238 kubelet[1856]: E0113 20:48:52.992156 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:53.992524 kubelet[1856]: E0113 20:48:53.992396 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:54.992867 kubelet[1856]: E0113 20:48:54.992824 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:55.993311 kubelet[1856]: E0113 20:48:55.993211 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:56.994607 kubelet[1856]: E0113 20:48:56.994479 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:57.995734 kubelet[1856]: E0113 20:48:57.995672 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:58.996474 kubelet[1856]: E0113 20:48:58.996367 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:48:59.997406 kubelet[1856]: E0113 20:48:59.997313 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:00.997838 kubelet[1856]: E0113 20:49:00.997761 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 13 20:49:01.998862 kubelet[1856]: E0113 20:49:01.998774 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:02.999169 kubelet[1856]: E0113 20:49:02.999039 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:04.000181 kubelet[1856]: E0113 20:49:04.000101 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:04.542482 systemd[1]: run-containerd-runc-k8s.io-2ea2c7b27f2f494ece519f2b01b83af09d9b769c7c702d4c7d882a8a60d53a7c-runc.fnY1bI.mount: Deactivated successfully. Jan 13 20:49:05.001151 kubelet[1856]: E0113 20:49:05.001060 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:06.001618 kubelet[1856]: E0113 20:49:06.001514 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:07.002753 kubelet[1856]: E0113 20:49:07.002670 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:08.003283 kubelet[1856]: E0113 20:49:08.003201 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:09.004461 kubelet[1856]: E0113 20:49:09.004365 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:10.004612 kubelet[1856]: E0113 20:49:10.004517 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:10.938751 kubelet[1856]: E0113 20:49:10.938636 1856 file.go:104] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 13 20:49:11.005314 kubelet[1856]: E0113 20:49:11.005194 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:11.199806 systemd[1]: Created slice kubepods-besteffort-pod71ec9a17_dc4d_43a9_89ae_cee6617e4608.slice - libcontainer container kubepods-besteffort-pod71ec9a17_dc4d_43a9_89ae_cee6617e4608.slice. Jan 13 20:49:11.251465 kubelet[1856]: I0113 20:49:11.251082 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dp8m\" (UniqueName: \"kubernetes.io/projected/71ec9a17-dc4d-43a9-89ae-cee6617e4608-kube-api-access-2dp8m\") pod \"test-pod-1\" (UID: \"71ec9a17-dc4d-43a9-89ae-cee6617e4608\") " pod="default/test-pod-1" Jan 13 20:49:11.251465 kubelet[1856]: I0113 20:49:11.251140 1856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4bd48fae-98b1-4396-b512-f967e40895f0\" (UniqueName: \"kubernetes.io/nfs/71ec9a17-dc4d-43a9-89ae-cee6617e4608-pvc-4bd48fae-98b1-4396-b512-f967e40895f0\") pod \"test-pod-1\" (UID: \"71ec9a17-dc4d-43a9-89ae-cee6617e4608\") " pod="default/test-pod-1" Jan 13 20:49:11.415095 kernel: FS-Cache: Loaded Jan 13 20:49:11.517913 kernel: RPC: Registered named UNIX socket transport module. Jan 13 20:49:11.518170 kernel: RPC: Registered udp transport module. Jan 13 20:49:11.518224 kernel: RPC: Registered tcp transport module. Jan 13 20:49:11.518914 kernel: RPC: Registered tcp-with-tls transport module. Jan 13 20:49:11.519306 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 13 20:49:11.803386 kernel: NFS: Registering the id_resolver key type Jan 13 20:49:11.803657 kernel: Key type id_resolver registered Jan 13 20:49:11.803707 kernel: Key type id_legacy registered Jan 13 20:49:11.859631 nfsidmap[3805]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Jan 13 20:49:11.872843 nfsidmap[3806]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Jan 13 20:49:12.006114 kubelet[1856]: E0113 20:49:12.006005 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:12.107371 containerd[1467]: time="2025-01-13T20:49:12.107066825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:71ec9a17-dc4d-43a9-89ae-cee6617e4608,Namespace:default,Attempt:0,}" Jan 13 20:49:12.557907 systemd-networkd[1353]: cali5ec59c6bf6e: Link UP Jan 13 20:49:12.561080 systemd-networkd[1353]: cali5ec59c6bf6e: Gained carrier Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.400 [INFO][3807] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.85-k8s-test--pod--1-eth0 default 71ec9a17-dc4d-43a9-89ae-cee6617e4608 1420 0 2025-01-13 20:48:42 +0000 UTC <nil> <nil> map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.85 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.85-k8s-test--pod--1-" Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.400 [INFO][3807] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" 
Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.85-k8s-test--pod--1-eth0" Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.467 [INFO][3818] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" HandleID="k8s-pod-network.e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" Workload="172.24.4.85-k8s-test--pod--1-eth0" Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.491 [INFO][3818] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" HandleID="k8s-pod-network.e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" Workload="172.24.4.85-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319c90), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.85", "pod":"test-pod-1", "timestamp":"2025-01-13 20:49:12.467769175 +0000 UTC"}, Hostname:"172.24.4.85", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.492 [INFO][3818] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.492 [INFO][3818] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.492 [INFO][3818] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.85' Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.498 [INFO][3818] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" host="172.24.4.85" Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.505 [INFO][3818] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.85" Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.511 [INFO][3818] ipam/ipam.go 489: Trying affinity for 192.168.10.0/26 host="172.24.4.85" Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.514 [INFO][3818] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.0/26 host="172.24.4.85" Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.520 [INFO][3818] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="172.24.4.85" Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.520 [INFO][3818] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" host="172.24.4.85" Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.523 [INFO][3818] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6 Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.536 [INFO][3818] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" host="172.24.4.85" Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.547 [INFO][3818] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.4/26] block=192.168.10.0/26 
handle="k8s-pod-network.e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" host="172.24.4.85" Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.548 [INFO][3818] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.4/26] handle="k8s-pod-network.e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" host="172.24.4.85" Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.548 [INFO][3818] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.548 [INFO][3818] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.4/26] IPv6=[] ContainerID="e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" HandleID="k8s-pod-network.e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" Workload="172.24.4.85-k8s-test--pod--1-eth0" Jan 13 20:49:12.585806 containerd[1467]: 2025-01-13 20:49:12.551 [INFO][3807] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.85-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.85-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"71ec9a17-dc4d-43a9-89ae-cee6617e4608", ResourceVersion:"1420", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 48, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"172.24.4.85", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:49:12.591192 containerd[1467]: 2025-01-13 20:49:12.551 [INFO][3807] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.4/32] ContainerID="e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.85-k8s-test--pod--1-eth0" Jan 13 20:49:12.591192 containerd[1467]: 2025-01-13 20:49:12.551 [INFO][3807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.85-k8s-test--pod--1-eth0" Jan 13 20:49:12.591192 containerd[1467]: 2025-01-13 20:49:12.562 [INFO][3807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.85-k8s-test--pod--1-eth0" Jan 13 20:49:12.591192 containerd[1467]: 2025-01-13 20:49:12.564 [INFO][3807] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.85-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.85-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"71ec9a17-dc4d-43a9-89ae-cee6617e4608", ResourceVersion:"1420", Generation:0, 
CreationTimestamp:time.Date(2025, time.January, 13, 20, 48, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.85", ContainerID:"e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"0a:df:b7:a4:73:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:49:12.591192 containerd[1467]: 2025-01-13 20:49:12.579 [INFO][3807] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.85-k8s-test--pod--1-eth0" Jan 13 20:49:12.637060 containerd[1467]: time="2025-01-13T20:49:12.633364062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:49:12.637060 containerd[1467]: time="2025-01-13T20:49:12.633420374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:49:12.637060 containerd[1467]: time="2025-01-13T20:49:12.633434603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:49:12.637060 containerd[1467]: time="2025-01-13T20:49:12.633509009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:49:12.659228 systemd[1]: Started cri-containerd-e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6.scope - libcontainer container e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6. Jan 13 20:49:12.700169 containerd[1467]: time="2025-01-13T20:49:12.700128774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:71ec9a17-dc4d-43a9-89ae-cee6617e4608,Namespace:default,Attempt:0,} returns sandbox id \"e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6\"" Jan 13 20:49:12.702395 containerd[1467]: time="2025-01-13T20:49:12.702068551Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:49:13.006787 kubelet[1856]: E0113 20:49:13.006667 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:13.127392 containerd[1467]: time="2025-01-13T20:49:13.127288542Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:49:13.129809 containerd[1467]: time="2025-01-13T20:49:13.129698347Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 13 20:49:13.137025 containerd[1467]: time="2025-01-13T20:49:13.136889822Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 434.772445ms" Jan 13 20:49:13.137400 containerd[1467]: 
time="2025-01-13T20:49:13.137176119Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 20:49:13.141353 containerd[1467]: time="2025-01-13T20:49:13.141291443Z" level=info msg="CreateContainer within sandbox \"e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 13 20:49:13.165817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1134113733.mount: Deactivated successfully. Jan 13 20:49:13.177765 containerd[1467]: time="2025-01-13T20:49:13.177642691Z" level=info msg="CreateContainer within sandbox \"e11117964cd6e52680a33d75933f4ba34065a5d76bfc28f6936786dbfb68bdc6\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"f4ca20147ad9de386f616fb100619606d4339e520771c5b67724769d697292b2\"" Jan 13 20:49:13.179326 containerd[1467]: time="2025-01-13T20:49:13.179234638Z" level=info msg="StartContainer for \"f4ca20147ad9de386f616fb100619606d4339e520771c5b67724769d697292b2\"" Jan 13 20:49:13.242104 systemd[1]: Started cri-containerd-f4ca20147ad9de386f616fb100619606d4339e520771c5b67724769d697292b2.scope - libcontainer container f4ca20147ad9de386f616fb100619606d4339e520771c5b67724769d697292b2. 
Jan 13 20:49:13.271888 containerd[1467]: time="2025-01-13T20:49:13.271504072Z" level=info msg="StartContainer for \"f4ca20147ad9de386f616fb100619606d4339e520771c5b67724769d697292b2\" returns successfully" Jan 13 20:49:13.777510 systemd-networkd[1353]: cali5ec59c6bf6e: Gained IPv6LL Jan 13 20:49:13.784066 kubelet[1856]: I0113 20:49:13.783893 1856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=31.346986739 podStartE2EDuration="31.78386103s" podCreationTimestamp="2025-01-13 20:48:42 +0000 UTC" firstStartedPulling="2025-01-13 20:49:12.70164339 +0000 UTC m=+82.976267702" lastFinishedPulling="2025-01-13 20:49:13.138517641 +0000 UTC m=+83.413141993" observedRunningTime="2025-01-13 20:49:13.783409227 +0000 UTC m=+84.058033589" watchObservedRunningTime="2025-01-13 20:49:13.78386103 +0000 UTC m=+84.058485392" Jan 13 20:49:14.008037 kubelet[1856]: E0113 20:49:14.007900 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:15.008609 kubelet[1856]: E0113 20:49:15.008527 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:16.009561 kubelet[1856]: E0113 20:49:16.009434 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:17.010231 kubelet[1856]: E0113 20:49:17.010080 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:18.010820 kubelet[1856]: E0113 20:49:18.010726 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:19.011077 kubelet[1856]: E0113 20:49:19.010941 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:20.011934 
kubelet[1856]: E0113 20:49:20.011775 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:21.012106 kubelet[1856]: E0113 20:49:21.011948 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:22.012715 kubelet[1856]: E0113 20:49:22.012600 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:23.013019 kubelet[1856]: E0113 20:49:23.012833 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:24.013357 kubelet[1856]: E0113 20:49:24.013289 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:49:25.014295 kubelet[1856]: E0113 20:49:25.014223 1856 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"