Jan 13 21:26:45.964610 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 21:26:45.964634 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 21:26:45.964644 kernel: BIOS-provided physical RAM map:
Jan 13 21:26:45.964652 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:26:45.964659 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:26:45.964669 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:26:45.964677 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 13 21:26:45.964699 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 13 21:26:45.964706 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 21:26:45.964714 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:26:45.964722 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 13 21:26:45.964729 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 21:26:45.964737 kernel: NX (Execute Disable) protection: active
Jan 13 21:26:45.964746 kernel: APIC: Static calls initialized
Jan 13 21:26:45.964755 kernel: SMBIOS 3.0.0 present.
Jan 13 21:26:45.964763 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 13 21:26:45.964771 kernel: Hypervisor detected: KVM
Jan 13 21:26:45.964779 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:26:45.964787 kernel: kvm-clock: using sched offset of 4911844130 cycles
Jan 13 21:26:45.964797 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:26:45.964805 kernel: tsc: Detected 1996.249 MHz processor
Jan 13 21:26:45.964814 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:26:45.964822 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:26:45.964831 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 13 21:26:45.964839 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:26:45.964847 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:26:45.964855 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 13 21:26:45.964863 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:26:45.964873 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 13 21:26:45.964881 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:26:45.964890 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:26:45.964898 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:26:45.964906 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 13 21:26:45.964914 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:26:45.964922 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:26:45.964930 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 13 21:26:45.964940 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 13 21:26:45.964948 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 13 21:26:45.964956 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 13 21:26:45.964964 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 13 21:26:45.964975 kernel: No NUMA configuration found
Jan 13 21:26:45.964984 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 13 21:26:45.964992 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jan 13 21:26:45.965002 kernel: Zone ranges:
Jan 13 21:26:45.965011 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:26:45.965019 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 21:26:45.965027 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 21:26:45.965036 kernel: Movable zone start for each node
Jan 13 21:26:45.965044 kernel: Early memory node ranges
Jan 13 21:26:45.965053 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:26:45.965061 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 13 21:26:45.965071 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 21:26:45.965080 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 13 21:26:45.965088 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:26:45.965096 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:26:45.965105 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 13 21:26:45.965113 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:26:45.965122 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:26:45.965130 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:26:45.965138 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:26:45.965148 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:26:45.965157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:26:45.965165 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:26:45.965173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:26:45.965181 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:26:45.965189 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 21:26:45.965198 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:26:45.965206 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 13 21:26:45.965214 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:26:45.965224 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:26:45.965233 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 21:26:45.965241 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 21:26:45.965249 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 21:26:45.965257 kernel: pcpu-alloc: [0] 0 1
Jan 13 21:26:45.965266 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 13 21:26:45.965275 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 21:26:45.965284 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:26:45.965294 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:26:45.965303 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:26:45.965311 kernel: Fallback order for Node 0: 0
Jan 13 21:26:45.965319 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 13 21:26:45.965328 kernel: Policy zone: Normal
Jan 13 21:26:45.965336 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:26:45.965344 kernel: software IO TLB: area num 2.
Jan 13 21:26:45.965353 kernel: Memory: 3966200K/4193772K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 227312K reserved, 0K cma-reserved)
Jan 13 21:26:45.965361 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:26:45.965371 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 21:26:45.965380 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:26:45.965388 kernel: Dynamic Preempt: voluntary
Jan 13 21:26:45.965396 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:26:45.965405 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:26:45.965414 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:26:45.965422 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:26:45.965431 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:26:45.965439 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:26:45.965447 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:26:45.965457 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:26:45.965466 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 21:26:45.965474 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:26:45.965497 kernel: Console: colour VGA+ 80x25
Jan 13 21:26:45.965505 kernel: printk: console [tty0] enabled
Jan 13 21:26:45.965513 kernel: printk: console [ttyS0] enabled
Jan 13 21:26:45.965522 kernel: ACPI: Core revision 20230628
Jan 13 21:26:45.965530 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:26:45.965538 kernel: x2apic enabled
Jan 13 21:26:45.965548 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:26:45.965557 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:26:45.965565 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:26:45.965574 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 13 21:26:45.965582 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 13 21:26:45.965591 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 13 21:26:45.965599 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:26:45.965608 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:26:45.965616 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:26:45.965627 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:26:45.965635 kernel: Speculative Store Bypass: Vulnerable
Jan 13 21:26:45.965644 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 13 21:26:45.965652 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:26:45.965666 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:26:45.965677 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:26:45.965686 kernel: landlock: Up and running.
Jan 13 21:26:45.965694 kernel: SELinux: Initializing.
Jan 13 21:26:45.965703 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:26:45.965712 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:26:45.965721 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 13 21:26:45.965732 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:26:45.965741 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:26:45.965750 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:26:45.965759 kernel: Performance Events: AMD PMU driver.
Jan 13 21:26:45.965768 kernel: ... version: 0
Jan 13 21:26:45.965778 kernel: ... bit width: 48
Jan 13 21:26:45.965787 kernel: ... generic registers: 4
Jan 13 21:26:45.965796 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:26:45.965805 kernel: ... max period: 00007fffffffffff
Jan 13 21:26:45.965814 kernel: ... fixed-purpose events: 0
Jan 13 21:26:45.965822 kernel: ... event mask: 000000000000000f
Jan 13 21:26:45.965831 kernel: signal: max sigframe size: 1440
Jan 13 21:26:45.965840 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:26:45.965849 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:26:45.965860 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:26:45.965868 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:26:45.965877 kernel: .... node #0, CPUs: #1
Jan 13 21:26:45.965886 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:26:45.965895 kernel: smpboot: Max logical packages: 2
Jan 13 21:26:45.965904 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 13 21:26:45.965912 kernel: devtmpfs: initialized
Jan 13 21:26:45.965921 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:26:45.965930 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:26:45.965939 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:26:45.965950 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:26:45.965959 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:26:45.965968 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:26:45.965977 kernel: audit: type=2000 audit(1736803605.660:1): state=initialized audit_enabled=0 res=1
Jan 13 21:26:45.965985 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:26:45.965994 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:26:45.966003 kernel: cpuidle: using governor menu
Jan 13 21:26:45.966012 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:26:45.966021 kernel: dca service started, version 1.12.1
Jan 13 21:26:45.966031 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:26:45.966040 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:26:45.966049 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:26:45.966058 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:26:45.966067 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:26:45.966076 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:26:45.966085 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:26:45.966093 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:26:45.966102 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:26:45.966113 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:26:45.966122 kernel: ACPI: Interpreter enabled
Jan 13 21:26:45.966131 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:26:45.966139 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:26:45.966148 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:26:45.966157 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:26:45.966166 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 13 21:26:45.966175 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:26:45.966302 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:26:45.966403 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 21:26:45.966512 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 21:26:45.966526 kernel: acpiphp: Slot [3] registered
Jan 13 21:26:45.966535 kernel: acpiphp: Slot [4] registered
Jan 13 21:26:45.966544 kernel: acpiphp: Slot [5] registered
Jan 13 21:26:45.966553 kernel: acpiphp: Slot [6] registered
Jan 13 21:26:45.966561 kernel: acpiphp: Slot [7] registered
Jan 13 21:26:45.966573 kernel: acpiphp: Slot [8] registered
Jan 13 21:26:45.966582 kernel: acpiphp: Slot [9] registered
Jan 13 21:26:45.966591 kernel: acpiphp: Slot [10] registered
Jan 13 21:26:45.966599 kernel: acpiphp: Slot [11] registered
Jan 13 21:26:45.966608 kernel: acpiphp: Slot [12] registered
Jan 13 21:26:45.966617 kernel: acpiphp: Slot [13] registered
Jan 13 21:26:45.966626 kernel: acpiphp: Slot [14] registered
Jan 13 21:26:45.966634 kernel: acpiphp: Slot [15] registered
Jan 13 21:26:45.966643 kernel: acpiphp: Slot [16] registered
Jan 13 21:26:45.966653 kernel: acpiphp: Slot [17] registered
Jan 13 21:26:45.966662 kernel: acpiphp: Slot [18] registered
Jan 13 21:26:45.966671 kernel: acpiphp: Slot [19] registered
Jan 13 21:26:45.966679 kernel: acpiphp: Slot [20] registered
Jan 13 21:26:45.966688 kernel: acpiphp: Slot [21] registered
Jan 13 21:26:45.966697 kernel: acpiphp: Slot [22] registered
Jan 13 21:26:45.966705 kernel: acpiphp: Slot [23] registered
Jan 13 21:26:45.966714 kernel: acpiphp: Slot [24] registered
Jan 13 21:26:45.966723 kernel: acpiphp: Slot [25] registered
Jan 13 21:26:45.966731 kernel: acpiphp: Slot [26] registered
Jan 13 21:26:45.966742 kernel: acpiphp: Slot [27] registered
Jan 13 21:26:45.966751 kernel: acpiphp: Slot [28] registered
Jan 13 21:26:45.966760 kernel: acpiphp: Slot [29] registered
Jan 13 21:26:45.966768 kernel: acpiphp: Slot [30] registered
Jan 13 21:26:45.966777 kernel: acpiphp: Slot [31] registered
Jan 13 21:26:45.966786 kernel: PCI host bridge to bus 0000:00
Jan 13 21:26:45.966878 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:26:45.966960 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:26:45.967045 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:26:45.967124 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:26:45.967203 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 13 21:26:45.967282 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:26:45.967386 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 21:26:45.967500 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 21:26:45.967615 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 13 21:26:45.967713 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 13 21:26:45.967804 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 13 21:26:45.967893 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 13 21:26:45.967983 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 13 21:26:45.968076 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 13 21:26:45.968179 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 13 21:26:45.968275 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 13 21:26:45.968366 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 13 21:26:45.968465 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 13 21:26:45.968594 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 13 21:26:45.968701 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 13 21:26:45.968798 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 13 21:26:45.968889 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 13 21:26:45.968985 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:26:45.969084 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:26:45.969177 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 13 21:26:45.969272 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 13 21:26:45.969362 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 13 21:26:45.969452 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 13 21:26:45.969590 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:26:45.969689 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 21:26:45.969781 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 13 21:26:45.969871 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 13 21:26:45.969968 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 13 21:26:45.970061 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 13 21:26:45.970152 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 13 21:26:45.970250 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:26:45.970348 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 13 21:26:45.970438 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 13 21:26:45.970590 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 13 21:26:45.970604 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:26:45.970613 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:26:45.970622 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:26:45.970631 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:26:45.970640 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 21:26:45.970652 kernel: iommu: Default domain type: Translated
Jan 13 21:26:45.970661 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:26:45.970670 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:26:45.970679 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:26:45.970687 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:26:45.970696 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 13 21:26:45.970789 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 13 21:26:45.970876 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 13 21:26:45.970968 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:26:45.970982 kernel: vgaarb: loaded
Jan 13 21:26:45.970991 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:26:45.970999 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:26:45.971008 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:26:45.971017 kernel: pnp: PnP ACPI init
Jan 13 21:26:45.971106 kernel: pnp 00:03: [dma 2]
Jan 13 21:26:45.971120 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 21:26:45.971129 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:26:45.971142 kernel: NET: Registered PF_INET protocol family
Jan 13 21:26:45.971150 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:26:45.971160 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:26:45.971169 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:26:45.971178 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:26:45.971187 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:26:45.971196 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:26:45.971205 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:26:45.971214 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:26:45.971224 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:26:45.971234 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:26:45.971313 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:26:45.971391 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:26:45.971468 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:26:45.971583 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:26:45.971662 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 13 21:26:45.971751 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 13 21:26:45.971846 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 21:26:45.971859 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:26:45.971868 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 21:26:45.971878 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 13 21:26:45.971886 kernel: Initialise system trusted keyrings
Jan 13 21:26:45.971895 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:26:45.971904 kernel: Key type asymmetric registered
Jan 13 21:26:45.971913 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:26:45.971924 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:26:45.971933 kernel: io scheduler mq-deadline registered
Jan 13 21:26:45.971942 kernel: io scheduler kyber registered
Jan 13 21:26:45.971951 kernel: io scheduler bfq registered
Jan 13 21:26:45.971960 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:26:45.971970 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 13 21:26:45.971979 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 13 21:26:45.971988 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 21:26:45.971997 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 13 21:26:45.972008 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:26:45.972017 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:26:45.972026 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:26:45.972035 kernel: random: crng init done
Jan 13 21:26:45.972044 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:26:45.972053 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:26:45.972146 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:26:45.972160 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:26:45.972238 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:26:45.972324 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:26:45 UTC (1736803605)
Jan 13 21:26:45.972404 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 13 21:26:45.972417 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:26:45.972426 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:26:45.972435 kernel: Segment Routing with IPv6
Jan 13 21:26:45.972444 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:26:45.972452 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:26:45.972461 kernel: Key type dns_resolver registered
Jan 13 21:26:45.972473 kernel: IPI shorthand broadcast: enabled
Jan 13 21:26:45.972516 kernel: sched_clock: Marking stable (997008033, 169926365)->(1206803430, -39869032)
Jan 13 21:26:45.972525 kernel: registered taskstats version 1
Jan 13 21:26:45.972534 kernel: Loading compiled-in X.509 certificates
Jan 13 21:26:45.972543 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 21:26:45.972552 kernel: Key type .fscrypt registered
Jan 13 21:26:45.972561 kernel: Key type fscrypt-provisioning registered
Jan 13 21:26:45.972570 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:26:45.972579 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:26:45.972591 kernel: ima: No architecture policies found
Jan 13 21:26:45.972599 kernel: clk: Disabling unused clocks
Jan 13 21:26:45.972608 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 13 21:26:45.972617 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:26:45.972626 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 13 21:26:45.972635 kernel: Run /init as init process
Jan 13 21:26:45.972644 kernel: with arguments:
Jan 13 21:26:45.972652 kernel: /init
Jan 13 21:26:45.972661 kernel: with environment:
Jan 13 21:26:45.972671 kernel: HOME=/
Jan 13 21:26:45.972680 kernel: TERM=linux
Jan 13 21:26:45.972697 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:26:45.972708 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:26:45.972720 systemd[1]: Detected virtualization kvm.
Jan 13 21:26:45.972729 systemd[1]: Detected architecture x86-64.
Jan 13 21:26:45.972739 systemd[1]: Running in initrd.
Jan 13 21:26:45.972750 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:26:45.972760 systemd[1]: Hostname set to .
Jan 13 21:26:45.972770 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:26:45.972779 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:26:45.972789 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:26:45.972799 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:26:45.972809 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:26:45.972827 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:26:45.972839 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:26:45.972849 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:26:45.972860 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:26:45.972870 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:26:45.972880 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:26:45.972892 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:26:45.972901 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:26:45.972911 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:26:45.972921 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:26:45.972931 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:26:45.972940 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:26:45.972950 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:26:45.972960 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:26:45.972972 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:26:45.972982 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:26:45.972992 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:26:45.973002 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:26:45.973011 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:26:45.973021 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:26:45.973031 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:26:45.973041 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:26:45.973051 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:26:45.973062 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:26:45.973090 systemd-journald[185]: Collecting audit messages is disabled.
Jan 13 21:26:45.973114 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:26:45.973125 systemd-journald[185]: Journal started
Jan 13 21:26:45.973149 systemd-journald[185]: Runtime Journal (/run/log/journal/92a455c953da48408f23bd9c1c1f0aa0) is 8.0M, max 78.3M, 70.3M free.
Jan 13 21:26:45.984544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:26:45.996890 systemd-modules-load[186]: Inserted module 'overlay'
Jan 13 21:26:45.998705 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:26:46.008006 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:26:46.010085 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:26:46.015559 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:26:46.039503 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:26:46.041063 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jan 13 21:26:46.083601 kernel: Bridge firewalling registered
Jan 13 21:26:46.041802 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:26:46.089643 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:26:46.090601 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:26:46.093201 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:26:46.093935 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:26:46.097714 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:26:46.103635 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:26:46.105790 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:26:46.108610 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:26:46.122539 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:26:46.124654 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:26:46.144704 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:26:46.145600 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:26:46.147606 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:26:46.175732 dracut-cmdline[221]: dracut-dracut-053
Jan 13 21:26:46.177887 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 21:26:46.180590 systemd-resolved[219]: Positive Trust Anchors:
Jan 13 21:26:46.180600 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:26:46.180641 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:26:46.187843 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jan 13 21:26:46.188826 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:26:46.189742 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:26:46.254551 kernel: SCSI subsystem initialized
Jan 13 21:26:46.265545 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:26:46.278507 kernel: iscsi: registered transport (tcp)
Jan 13 21:26:46.301992 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:26:46.302055 kernel: QLogic iSCSI HBA Driver
Jan 13 21:26:46.365603 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:26:46.371786 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:26:46.424849 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:26:46.424982 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:26:46.425034 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:26:46.471575 kernel: raid6: sse2x4 gen() 12550 MB/s
Jan 13 21:26:46.489561 kernel: raid6: sse2x2 gen() 14446 MB/s
Jan 13 21:26:46.508079 kernel: raid6: sse2x1 gen() 9625 MB/s
Jan 13 21:26:46.508138 kernel: raid6: using algorithm sse2x2 gen() 14446 MB/s
Jan 13 21:26:46.527300 kernel: raid6: .... xor() 9093 MB/s, rmw enabled
Jan 13 21:26:46.527365 kernel: raid6: using ssse3x2 recovery algorithm
Jan 13 21:26:46.552063 kernel: xor: measuring software checksum speed
Jan 13 21:26:46.552136 kernel: prefetch64-sse : 16990 MB/sec
Jan 13 21:26:46.552617 kernel: generic_sse : 15524 MB/sec
Jan 13 21:26:46.553865 kernel: xor: using function: prefetch64-sse (16990 MB/sec)
Jan 13 21:26:46.737593 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:26:46.749408 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:26:46.755744 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:26:46.768623 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Jan 13 21:26:46.772537 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:26:46.783800 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:26:46.800054 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Jan 13 21:26:46.836261 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:26:46.844803 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:26:46.908176 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:26:46.913642 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:26:46.929088 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:26:46.932784 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:26:46.934823 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:26:46.936286 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:26:46.942665 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:26:46.957748 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:26:46.991072 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 13 21:26:47.014734 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 13 21:26:47.014858 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:26:47.014876 kernel: GPT:17805311 != 20971519
Jan 13 21:26:47.014888 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:26:47.014900 kernel: GPT:17805311 != 20971519
Jan 13 21:26:47.014911 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:26:47.014924 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:26:47.008398 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:26:47.008560 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:26:47.022571 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:26:47.025655 kernel: libata version 3.00 loaded.
Jan 13 21:26:47.024734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:26:47.028772 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 13 21:26:47.069258 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (457)
Jan 13 21:26:47.069279 kernel: scsi host0: ata_piix
Jan 13 21:26:47.069423 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463)
Jan 13 21:26:47.069440 kernel: scsi host1: ata_piix
Jan 13 21:26:47.069966 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 13 21:26:47.069989 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 13 21:26:47.024882 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:26:47.027188 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:26:47.033139 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:26:47.071161 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:26:47.111743 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:26:47.123358 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:26:47.128105 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:26:47.128737 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:26:47.135399 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:26:47.142701 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:26:47.145041 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:26:47.158781 disk-uuid[509]: Primary Header is updated.
Jan 13 21:26:47.158781 disk-uuid[509]: Secondary Entries is updated.
Jan 13 21:26:47.158781 disk-uuid[509]: Secondary Header is updated.
Jan 13 21:26:47.168513 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:26:47.171789 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:26:48.261573 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:26:48.262655 disk-uuid[513]: The operation has completed successfully.
Jan 13 21:26:48.352232 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:26:48.352524 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:26:48.387707 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:26:48.394220 sh[529]: Success
Jan 13 21:26:48.428565 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 13 21:26:48.484395 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:26:48.494921 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:26:48.497610 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:26:48.521596 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 13 21:26:48.521674 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:48.523670 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:26:48.525799 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:26:48.528469 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:26:48.550185 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:26:48.552359 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:26:48.561781 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:26:48.565831 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:26:48.589727 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:26:48.589793 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:48.594327 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:26:48.606550 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:26:48.623882 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:26:48.625057 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:26:48.637145 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:26:48.644634 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:26:48.724013 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:26:48.731622 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:26:48.781263 systemd-networkd[711]: lo: Link UP
Jan 13 21:26:48.781271 systemd-networkd[711]: lo: Gained carrier
Jan 13 21:26:48.786433 systemd-networkd[711]: Enumeration completed
Jan 13 21:26:48.787958 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:26:48.788651 systemd-networkd[711]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:26:48.788655 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:26:48.789690 systemd-networkd[711]: eth0: Link UP
Jan 13 21:26:48.789694 systemd-networkd[711]: eth0: Gained carrier
Jan 13 21:26:48.789701 systemd-networkd[711]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:26:48.790711 systemd[1]: Reached target network.target - Network.
Jan 13 21:26:48.799156 ignition[654]: Ignition 2.20.0
Jan 13 21:26:48.799657 ignition[654]: Stage: fetch-offline
Jan 13 21:26:48.799528 systemd-networkd[711]: eth0: DHCPv4 address 172.24.4.197/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 21:26:48.799701 ignition[654]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:48.812846 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:26:48.799711 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:26:48.799813 ignition[654]: parsed url from cmdline: ""
Jan 13 21:26:48.799817 ignition[654]: no config URL provided
Jan 13 21:26:48.799822 ignition[654]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:26:48.800536 ignition[654]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:26:48.800543 ignition[654]: failed to fetch config: resource requires networking
Jan 13 21:26:48.809340 ignition[654]: Ignition finished successfully
Jan 13 21:26:48.822648 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:26:48.846167 ignition[720]: Ignition 2.20.0
Jan 13 21:26:48.846179 ignition[720]: Stage: fetch
Jan 13 21:26:48.846368 ignition[720]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:48.846381 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:26:48.846510 ignition[720]: parsed url from cmdline: ""
Jan 13 21:26:48.846514 ignition[720]: no config URL provided
Jan 13 21:26:48.846520 ignition[720]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:26:48.846529 ignition[720]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:26:48.846650 ignition[720]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 13 21:26:48.846664 ignition[720]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 13 21:26:48.846672 ignition[720]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 13 21:26:48.974802 ignition[720]: GET result: OK
Jan 13 21:26:48.974942 ignition[720]: parsing config with SHA512: 0044e4d4d03cddd63624c61fdc7e57959a5b0afe6d6e04545cf811cf99bdd570cf0953b7003c1756d38d0530f5de3642dbff58ca2c80142e639b7a5f05cfc672
Jan 13 21:26:48.981851 unknown[720]: fetched base config from "system"
Jan 13 21:26:48.981873 unknown[720]: fetched base config from "system"
Jan 13 21:26:48.982521 ignition[720]: fetch: fetch complete
Jan 13 21:26:48.981887 unknown[720]: fetched user config from "openstack"
Jan 13 21:26:48.982534 ignition[720]: fetch: fetch passed
Jan 13 21:26:48.986036 systemd-resolved[219]: Detected conflict on linux IN A 172.24.4.197
Jan 13 21:26:48.982633 ignition[720]: Ignition finished successfully
Jan 13 21:26:48.986051 systemd-resolved[219]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
Jan 13 21:26:48.986717 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:26:48.994819 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:26:49.027663 ignition[727]: Ignition 2.20.0
Jan 13 21:26:49.027693 ignition[727]: Stage: kargs
Jan 13 21:26:49.028225 ignition[727]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:49.028253 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:26:49.035775 ignition[727]: kargs: kargs passed
Jan 13 21:26:49.035886 ignition[727]: Ignition finished successfully
Jan 13 21:26:49.038063 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:26:49.046754 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:26:49.085621 ignition[733]: Ignition 2.20.0
Jan 13 21:26:49.085651 ignition[733]: Stage: disks
Jan 13 21:26:49.086165 ignition[733]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:49.090977 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:26:49.086194 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:26:49.095388 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:26:49.088453 ignition[733]: disks: disks passed
Jan 13 21:26:49.097594 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:26:49.088615 ignition[733]: Ignition finished successfully
Jan 13 21:26:49.100886 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:26:49.104011 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:26:49.106611 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:26:49.125837 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:26:49.308942 systemd-fsck[742]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 21:26:49.556220 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:26:49.564770 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:26:49.720535 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 13 21:26:49.721570 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:26:49.724537 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:26:49.731554 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:26:49.746601 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:26:49.747345 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:26:49.751241 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 13 21:26:49.752757 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:26:49.754047 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:26:49.756978 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:26:49.770048 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (750)
Jan 13 21:26:49.770096 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:26:49.767462 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:26:49.796737 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:49.796762 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:26:49.796775 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:26:49.797678 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:26:49.879510 initrd-setup-root[776]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:26:49.886164 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:26:49.892559 initrd-setup-root[792]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:26:49.898294 initrd-setup-root[799]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:26:50.032325 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:26:50.039692 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:26:50.043008 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:26:50.050509 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:26:50.051048 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:26:50.077519 ignition[869]: INFO : Ignition 2.20.0
Jan 13 21:26:50.077519 ignition[869]: INFO : Stage: mount
Jan 13 21:26:50.077519 ignition[869]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:50.077519 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:26:50.082793 ignition[869]: INFO : mount: mount passed
Jan 13 21:26:50.082793 ignition[869]: INFO : Ignition finished successfully
Jan 13 21:26:50.080509 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:26:50.093956 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:26:50.382743 systemd-networkd[711]: eth0: Gained IPv6LL
Jan 13 21:26:57.007121 coreos-metadata[752]: Jan 13 21:26:57.007 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 21:26:57.050602 coreos-metadata[752]: Jan 13 21:26:57.050 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 21:26:57.067519 coreos-metadata[752]: Jan 13 21:26:57.067 INFO Fetch successful
Jan 13 21:26:57.067519 coreos-metadata[752]: Jan 13 21:26:57.067 INFO wrote hostname ci-4152-2-0-1-639505821b.novalocal to /sysroot/etc/hostname
Jan 13 21:26:57.071638 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 13 21:26:57.071838 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 13 21:26:57.083765 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:26:57.109867 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:26:57.128570 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (886)
Jan 13 21:26:57.136591 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:26:57.136701 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:26:57.140865 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:26:57.153573 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:26:57.158739 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:26:57.203593 ignition[904]: INFO : Ignition 2.20.0
Jan 13 21:26:57.205437 ignition[904]: INFO : Stage: files
Jan 13 21:26:57.205437 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:57.205437 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:26:57.210006 ignition[904]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:26:57.210006 ignition[904]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:26:57.210006 ignition[904]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:26:57.215761 ignition[904]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:26:57.216709 ignition[904]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:26:57.217533 ignition[904]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:26:57.216766 unknown[904]: wrote ssh authorized keys file for user: core
Jan 13 21:26:57.220842 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:26:57.221757 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:26:57.222633 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:26:57.222633 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:26:57.222633 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:26:57.222633 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:26:57.226809 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:26:57.226809 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 13 21:26:57.645721 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 13 21:26:59.235635 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:26:59.238816 ignition[904]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:26:59.238816 ignition[904]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:26:59.238816 ignition[904]: INFO : files: files passed
Jan 13 21:26:59.238816 ignition[904]: INFO : Ignition finished successfully
Jan 13 21:26:59.238088 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:26:59.247808 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:26:59.252669 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:26:59.256159 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:26:59.256784 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:26:59.277967 initrd-setup-root-after-ignition[933]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:26:59.277967 initrd-setup-root-after-ignition[933]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:26:59.280088 initrd-setup-root-after-ignition[937]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:26:59.282958 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:26:59.283770 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:26:59.291743 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:26:59.314705 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:26:59.314930 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:26:59.317368 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:26:59.319011 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:26:59.320961 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:26:59.332610 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:26:59.355683 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:26:59.366781 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:26:59.385843 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:26:59.387531 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:26:59.390180 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:26:59.392541 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:26:59.392851 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:26:59.395341 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:26:59.397068 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:26:59.399471 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:26:59.401871 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:26:59.412311 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:26:59.414511 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:26:59.416731 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:26:59.418951 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:26:59.421053 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:26:59.423220 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:26:59.425241 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:26:59.425351 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:26:59.427702 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:26:59.428855 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:26:59.430627 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:26:59.430725 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:26:59.432821 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:26:59.432957 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:26:59.436092 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:26:59.436231 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:26:59.437323 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:26:59.437441 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:26:59.446646 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:26:59.449449 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:26:59.450124 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:26:59.452684 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:26:59.454659 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:26:59.455607 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:26:59.462407 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:26:59.463532 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:26:59.468514 ignition[957]: INFO : Ignition 2.20.0
Jan 13 21:26:59.468514 ignition[957]: INFO : Stage: umount
Jan 13 21:26:59.470542 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:26:59.470542 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:26:59.470542 ignition[957]: INFO : umount: umount passed
Jan 13 21:26:59.470542 ignition[957]: INFO : Ignition finished successfully
Jan 13 21:26:59.472660 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:26:59.472763 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:26:59.474179 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:26:59.474247 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:26:59.475709 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:26:59.475749 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:26:59.477099 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 21:26:59.477137 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 21:26:59.477738 systemd[1]: Stopped target network.target - Network.
Jan 13 21:26:59.478913 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:26:59.478964 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:26:59.480134 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:26:59.481538 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:26:59.485760 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:26:59.486575 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:26:59.487016 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:26:59.487498 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:26:59.487532 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:26:59.488072 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:26:59.488103 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:26:59.489347 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:26:59.489386 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:26:59.490386 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:26:59.490426 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:26:59.491518 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:26:59.492530 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:26:59.494372 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:26:59.494884 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:26:59.494964 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:26:59.495924 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:26:59.496000 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:26:59.496271 systemd-networkd[711]: eth0: DHCPv6 lease lost
Jan 13 21:26:59.497836 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:26:59.497921 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:26:59.499966 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:26:59.500077 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:26:59.502709 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:26:59.502977 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:26:59.509625 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:26:59.511625 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:26:59.511678 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:26:59.512718 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:26:59.512759 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:26:59.513841 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:26:59.513881 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:26:59.515080 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:26:59.515119 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:26:59.516191 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:26:59.523606 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:26:59.523718 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:26:59.524944 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:26:59.525060 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:26:59.526445 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:26:59.526512 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:26:59.527591 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:26:59.527623 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:26:59.528742 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:26:59.528783 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:26:59.534107 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:26:59.534147 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:26:59.535275 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:26:59.535315 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:26:59.541659 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:26:59.542414 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:26:59.542463 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:26:59.543024 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 21:26:59.543065 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:26:59.543609 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:26:59.543647 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:26:59.544174 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:26:59.544211 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:26:59.548075 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:26:59.548188 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:26:59.549276 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:26:59.558624 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:26:59.565160 systemd[1]: Switching root.
Jan 13 21:26:59.630143 systemd-journald[185]: Journal stopped
Jan 13 21:27:01.653441 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:27:01.655970 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:27:01.655992 kernel: SELinux: policy capability open_perms=1
Jan 13 21:27:01.656003 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:27:01.656014 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:27:01.656025 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:27:01.656037 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:27:01.656048 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:27:01.656058 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:27:01.656071 kernel: audit: type=1403 audit(1736803620.490:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:27:01.656084 systemd[1]: Successfully loaded SELinux policy in 85.296ms.
Jan 13 21:27:01.656100 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.391ms.
Jan 13 21:27:01.656113 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:27:01.656125 systemd[1]: Detected virtualization kvm.
Jan 13 21:27:01.656139 systemd[1]: Detected architecture x86-64.
Jan 13 21:27:01.656151 systemd[1]: Detected first boot.
Jan 13 21:27:01.656163 systemd[1]: Hostname set to .
Jan 13 21:27:01.656174 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:27:01.656186 zram_generator::config[1002]: No configuration found.
Jan 13 21:27:01.656198 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:27:01.656209 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:27:01.656223 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:27:01.656235 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:27:01.656247 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:27:01.656259 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:27:01.656270 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:27:01.656282 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:27:01.656297 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:27:01.656308 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:27:01.656322 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:27:01.656334 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:27:01.656346 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:27:01.656358 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:27:01.656370 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:27:01.656381 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:27:01.656393 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:27:01.656405 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:27:01.656417 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:27:01.656430 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:27:01.656442 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:27:01.656454 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:27:01.656466 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:27:01.656754 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:27:01.656773 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:27:01.656788 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:27:01.656800 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:27:01.656812 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:27:01.656823 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:27:01.656835 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:27:01.656847 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:27:01.656859 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:27:01.656871 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:27:01.656883 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:27:01.656895 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:27:01.656909 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:27:01.656921 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:27:01.656932 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:27:01.656944 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:27:01.656956 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:27:01.656967 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:27:01.656979 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:27:01.656991 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:27:01.657005 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:27:01.657017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:27:01.657028 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:27:01.657040 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:27:01.657052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:27:01.657064 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:27:01.657076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:27:01.657088 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:27:01.657443 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:27:01.657458 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:27:01.657470 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:27:01.657497 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:27:01.657510 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:27:01.657521 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:27:01.657533 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:27:01.657544 kernel: fuse: init (API version 7.39)
Jan 13 21:27:01.657556 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:27:01.657571 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:27:01.657583 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:27:01.657594 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:27:01.657606 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:27:01.657618 systemd[1]: Stopped verity-setup.service.
Jan 13 21:27:01.657630 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:27:01.657641 kernel: loop: module loaded
Jan 13 21:27:01.657652 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:27:01.657664 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:27:01.657678 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:27:01.657690 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:27:01.657701 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:27:01.657712 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:27:01.657724 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:27:01.657738 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:27:01.657750 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:27:01.657762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:27:01.657773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:27:01.657787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:27:01.657800 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:27:01.657812 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:27:01.657827 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:27:01.657861 systemd-journald[1098]: Collecting audit messages is disabled.
Jan 13 21:27:01.657887 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:27:01.657902 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:27:01.657914 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:27:01.657926 systemd-journald[1098]: Journal started
Jan 13 21:27:01.657950 systemd-journald[1098]: Runtime Journal (/run/log/journal/92a455c953da48408f23bd9c1c1f0aa0) is 8.0M, max 78.3M, 70.3M free.
Jan 13 21:27:01.261633 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:27:01.287826 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 21:27:01.288212 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:27:01.661975 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:27:01.661382 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:27:01.667187 kernel: ACPI: bus type drm_connector registered
Jan 13 21:27:01.662914 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:27:01.665700 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:27:01.670840 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:27:01.670996 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:27:01.677948 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:27:01.684235 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:27:01.689346 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:27:01.690416 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:27:01.690531 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:27:01.692153 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:27:01.698560 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:27:01.703649 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:27:01.704717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:27:01.709597 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:27:01.711660 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:27:01.712615 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:27:01.720917 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:27:01.722311 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:27:01.726634 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:27:01.728645 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:27:01.731937 systemd-journald[1098]: Time spent on flushing to /var/log/journal/92a455c953da48408f23bd9c1c1f0aa0 is 64.071ms for 927 entries.
Jan 13 21:27:01.731937 systemd-journald[1098]: System Journal (/var/log/journal/92a455c953da48408f23bd9c1c1f0aa0) is 8.0M, max 584.8M, 576.8M free.
Jan 13 21:27:01.849327 systemd-journald[1098]: Received client request to flush runtime journal.
Jan 13 21:27:01.849382 kernel: loop0: detected capacity change from 0 to 210664
Jan 13 21:27:01.733833 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:27:01.739266 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:27:01.745740 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:27:01.746496 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:27:01.747308 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:27:01.759254 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:27:01.760828 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:27:01.763048 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:27:01.770978 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:27:01.799625 udevadm[1141]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 21:27:01.831538 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:27:01.842734 systemd-tmpfiles[1136]: ACLs are not supported, ignoring.
Jan 13 21:27:01.842748 systemd-tmpfiles[1136]: ACLs are not supported, ignoring.
Jan 13 21:27:01.848186 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:27:01.858635 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:27:01.859544 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:27:01.868434 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:27:01.869075 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:27:01.884506 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:27:01.914898 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:27:01.925570 kernel: loop1: detected capacity change from 0 to 138184
Jan 13 21:27:01.924409 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:27:01.957458 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Jan 13 21:27:01.957791 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Jan 13 21:27:01.968511 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:27:01.982501 kernel: loop2: detected capacity change from 0 to 8
Jan 13 21:27:02.003930 kernel: loop3: detected capacity change from 0 to 140992
Jan 13 21:27:02.097504 kernel: loop4: detected capacity change from 0 to 210664
Jan 13 21:27:02.152748 kernel: loop5: detected capacity change from 0 to 138184
Jan 13 21:27:02.209713 kernel: loop6: detected capacity change from 0 to 8
Jan 13 21:27:02.212525 kernel: loop7: detected capacity change from 0 to 140992
Jan 13 21:27:02.266035 (sd-merge)[1163]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 13 21:27:02.266987 (sd-merge)[1163]: Merged extensions into '/usr'.
Jan 13 21:27:02.271817 systemd[1]: Reloading requested from client PID 1135 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:27:02.271830 systemd[1]: Reloading...
Jan 13 21:27:02.371532 zram_generator::config[1186]: No configuration found.
Jan 13 21:27:02.572337 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:27:02.613845 ldconfig[1130]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:27:02.631047 systemd[1]: Reloading finished in 358 ms.
Jan 13 21:27:02.661006 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:27:02.661943 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:27:02.662714 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:27:02.670653 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:27:02.684685 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:27:02.689571 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:27:02.702619 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:27:02.702954 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:27:02.703596 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:27:02.703610 systemd[1]: Reloading...
Jan 13 21:27:02.705810 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:27:02.706124 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Jan 13 21:27:02.706192 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Jan 13 21:27:02.713128 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:27:02.713139 systemd-tmpfiles[1248]: Skipping /boot
Jan 13 21:27:02.728102 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:27:02.728116 systemd-tmpfiles[1248]: Skipping /boot
Jan 13 21:27:02.751579 systemd-udevd[1249]: Using default interface naming scheme 'v255'.
Jan 13 21:27:02.789549 zram_generator::config[1278]: No configuration found.
Jan 13 21:27:02.846506 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1297)
Jan 13 21:27:02.955526 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 13 21:27:02.984520 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 21:27:02.988019 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:27:03.020502 kernel: ACPI: button: Power Button [PWRF]
Jan 13 21:27:03.028497 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 21:27:03.047502 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 21:27:03.057376 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 21:27:03.057998 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:27:03.059089 systemd[1]: Reloading finished in 355 ms.
Jan 13 21:27:03.071071 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 13 21:27:03.071128 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 13 21:27:03.070173 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:27:03.075709 kernel: Console: switching to colour dummy device 80x25
Jan 13 21:27:03.077561 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 13 21:27:03.077597 kernel: [drm] features: -context_init
Jan 13 21:27:03.079917 kernel: [drm] number of scanouts: 1
Jan 13 21:27:03.079954 kernel: [drm] number of cap sets: 0
Jan 13 21:27:03.084097 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 13 21:27:03.082931 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:27:03.096906 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 13 21:27:03.096988 kernel: Console: switching to colour frame buffer device 160x50
Jan 13 21:27:03.104504 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 13 21:27:03.113984 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:27:03.120301 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:27:03.124634 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 21:27:03.129615 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:27:03.129822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:27:03.132788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:27:03.135976 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:27:03.140655 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:27:03.144467 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:27:03.145491 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:27:03.147784 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:27:03.149433 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:27:03.157754 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:27:03.159552 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:27:03.176672 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:27:03.181689 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:27:03.183774 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:27:03.185368 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:27:03.187111 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:27:03.188289 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:27:03.189557 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:27:03.189948 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:27:03.190117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:27:03.197682 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:27:03.203393 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:27:03.203797 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:27:03.218664 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:27:03.218786 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:27:03.221423 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:27:03.247530 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:27:03.247940 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:27:03.260882 lvm[1390]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:27:03.260723 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:27:03.261728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:27:03.264803 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:27:03.269423 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:27:03.274925 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:27:03.277907 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:27:03.293891 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:27:03.299033 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:27:03.300190 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:27:03.313594 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:27:03.323678 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:27:03.324506 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:27:03.333944 augenrules[1416]: No rules Jan 13 21:27:03.343164 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:27:03.343749 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 21:27:03.371655 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:27:03.382577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:27:03.420496 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:27:03.422362 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:27:03.434329 systemd-resolved[1379]: Positive Trust Anchors: Jan 13 21:27:03.434344 systemd-resolved[1379]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:27:03.434386 systemd-resolved[1379]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:27:03.434805 systemd-networkd[1378]: lo: Link UP Jan 13 21:27:03.434986 systemd-networkd[1378]: lo: Gained carrier Jan 13 21:27:03.436184 systemd-networkd[1378]: Enumeration completed Jan 13 21:27:03.436557 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:27:03.439769 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:27:03.439967 systemd-resolved[1379]: Using system hostname 'ci-4152-2-0-1-639505821b.novalocal'. Jan 13 21:27:03.440741 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:27:03.442940 systemd-networkd[1378]: eth0: Link UP Jan 13 21:27:03.442950 systemd-networkd[1378]: eth0: Gained carrier Jan 13 21:27:03.442974 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:27:03.448132 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:27:03.449257 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:27:03.449826 systemd[1]: Reached target network.target - Network.
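[Editor's note, not part of the log:] The "Positive Trust Anchors" entry above is systemd-resolved's built-in DNSSEC root trust anchor, the KSK-2017 DS record for the root zone ".". A minimal sketch splitting the record (quoted verbatim from the log) into its standard DS-record fields — key tag, algorithm, digest type, digest:

```shell
# Split the DS record logged by systemd-resolved into its fields.
# DS record layout: owner, class, type, key tag, algorithm, digest type, digest.
ds='. IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d'
set -- $ds
keytag=$4; alg=$5; digesttype=$6; digest=$7
echo "key tag=$keytag algorithm=$alg digest type=$digesttype"
# → key tag=20326 algorithm=8 digest type=2 (RSASHA256, SHA-256 digest)
```

The "Negative trust anchors" that follow are the standard private-use and locally served zones (RFC 6303-style reverse zones, home.arpa, .local, .test, etc.) for which resolved does not require DNSSEC proof of non-existence.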
Jan 13 21:27:03.450254 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:27:03.453100 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:27:03.455523 systemd-networkd[1378]: eth0: DHCPv4 address 172.24.4.197/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 13 21:27:03.456059 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:27:03.456573 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:27:03.457205 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:27:03.457571 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection. Jan 13 21:27:03.462419 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:27:03.462955 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:27:03.463392 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:27:03.463424 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:27:03.465943 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:27:03.466770 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:27:03.469845 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:27:03.477752 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:27:03.483640 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:27:03.484861 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:27:03.487800 systemd[1]: Reached target basic.target - Basic System. 
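[Editor's note, not part of the log:] The systemd-networkd DHCPv4 line above carries the interface address, prefix length, and gateway in a fixed message layout. An illustrative POSIX-sh sketch (field layout taken from the message format shown in the log) pulling those values out:

```shell
# Parse address and gateway out of the systemd-networkd DHCPv4 message
# quoted from the log above, using only POSIX parameter expansion.
line='eth0: DHCPv4 address 172.24.4.197/24, gateway 172.24.4.1 acquired from 172.24.4.1'
addr=${line#*"DHCPv4 address "}; addr=${addr%%,*}
gw=${line#*"gateway "};          gw=${gw%% acquired*}
echo "address=$addr gateway=$gw"
# → address=172.24.4.197/24 gateway=172.24.4.1
```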
Jan 13 21:27:03.488822 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:27:03.488873 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:27:03.504647 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:27:03.509539 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:27:03.515642 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:27:03.525327 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:27:03.533667 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:27:03.534403 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:27:03.538957 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:27:03.547288 jq[1442]: false Jan 13 21:27:03.550100 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:27:03.558562 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:27:03.575189 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:27:03.578387 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:27:03.578853 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 13 21:27:03.583290 extend-filesystems[1443]: Found loop4 Jan 13 21:27:03.583290 extend-filesystems[1443]: Found loop5 Jan 13 21:27:03.583290 extend-filesystems[1443]: Found loop6 Jan 13 21:27:03.583290 extend-filesystems[1443]: Found loop7 Jan 13 21:27:03.583290 extend-filesystems[1443]: Found vda Jan 13 21:27:03.583290 extend-filesystems[1443]: Found vda1 Jan 13 21:27:03.583290 extend-filesystems[1443]: Found vda2 Jan 13 21:27:03.583290 extend-filesystems[1443]: Found vda3 Jan 13 21:27:03.583290 extend-filesystems[1443]: Found usr Jan 13 21:27:03.583290 extend-filesystems[1443]: Found vda4 Jan 13 21:27:03.583290 extend-filesystems[1443]: Found vda6 Jan 13 21:27:03.583290 extend-filesystems[1443]: Found vda7 Jan 13 21:27:03.583290 extend-filesystems[1443]: Found vda9 Jan 13 21:27:03.583290 extend-filesystems[1443]: Checking size of /dev/vda9 Jan 13 21:27:03.665897 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1322) Jan 13 21:27:03.633541 dbus-daemon[1439]: [system] SELinux support is enabled Jan 13 21:27:03.726149 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 13 21:27:03.726192 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 13 21:27:03.726236 extend-filesystems[1443]: Resized partition /dev/vda9 Jan 13 21:27:03.587733 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:27:03.742281 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:27:03.742281 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:27:03.742281 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:27:03.742281 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 13 21:27:03.605714 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 13 21:27:03.753804 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Jan 13 21:27:03.610292 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:27:03.761793 update_engine[1455]: I20250113 21:27:03.633956 1455 main.cc:92] Flatcar Update Engine starting Jan 13 21:27:03.761793 update_engine[1455]: I20250113 21:27:03.658718 1455 update_check_scheduler.cc:74] Next update check in 11m39s Jan 13 21:27:03.610453 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:27:03.762130 jq[1458]: true Jan 13 21:27:03.610737 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:27:03.610865 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:27:03.763500 jq[1466]: true Jan 13 21:27:03.612565 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:27:03.612717 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:27:03.637559 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:27:03.655200 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:27:03.655228 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:27:03.661639 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:27:03.661662 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:27:03.668369 systemd[1]: Started update-engine.service - Update Engine. 
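[Editor's note, not part of the log:] The resize2fs messages above record an online grow of the mounted root filesystem /dev/vda9 from 1617920 to 2014203 blocks. A small sketch, assuming the 4 KiB block size reported in the EXT4-fs kernel lines, converting those block counts into MiB:

```shell
# Convert the ext4 block counts from the resize2fs log entries into MiB.
# Assumes the 4 KiB (4096-byte) block size reported by the kernel above.
old_blocks=1617920
new_blocks=2014203
old_mib=$(( old_blocks * 4096 / 1048576 ))
new_mib=$(( new_blocks * 4096 / 1048576 ))
echo "resized from ${old_mib} MiB to ${new_mib} MiB"
# → resized from 6320 MiB to 7867 MiB
```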
Jan 13 21:27:03.690037 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:27:03.699637 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:27:03.713915 systemd-logind[1450]: New seat seat0. Jan 13 21:27:03.723922 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:27:03.723939 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:27:03.724209 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:27:03.737863 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:27:03.738021 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:27:03.820026 bash[1488]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:27:03.821041 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:27:03.837758 systemd[1]: Starting sshkeys.service... Jan 13 21:27:03.849050 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:27:03.861079 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:27:03.869810 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:27:04.065683 containerd[1472]: time="2025-01-13T21:27:04.065571214Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 21:27:04.109542 containerd[1472]: time="2025-01-13T21:27:04.105452402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:04.110963 containerd[1472]: time="2025-01-13T21:27:04.110920186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:04.110963 containerd[1472]: time="2025-01-13T21:27:04.110953499Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:27:04.111027 containerd[1472]: time="2025-01-13T21:27:04.110971633Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:27:04.111180 containerd[1472]: time="2025-01-13T21:27:04.111149697Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:27:04.111180 containerd[1472]: time="2025-01-13T21:27:04.111176587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:04.111263 containerd[1472]: time="2025-01-13T21:27:04.111241188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:04.111291 containerd[1472]: time="2025-01-13T21:27:04.111261867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:04.111453 containerd[1472]: time="2025-01-13T21:27:04.111419803Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:04.111453 containerd[1472]: time="2025-01-13T21:27:04.111445882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:27:04.111523 containerd[1472]: time="2025-01-13T21:27:04.111462103Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:04.111523 containerd[1472]: time="2025-01-13T21:27:04.111473865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:04.111590 containerd[1472]: time="2025-01-13T21:27:04.111569664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:04.111817 containerd[1472]: time="2025-01-13T21:27:04.111786251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:04.111917 containerd[1472]: time="2025-01-13T21:27:04.111895405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:04.111942 containerd[1472]: time="2025-01-13T21:27:04.111917156Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:27:04.112013 containerd[1472]: time="2025-01-13T21:27:04.111994411Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:27:04.112066 containerd[1472]: time="2025-01-13T21:27:04.112049575Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:27:04.121549 containerd[1472]: time="2025-01-13T21:27:04.121514536Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:27:04.121607 containerd[1472]: time="2025-01-13T21:27:04.121559080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 13 21:27:04.121607 containerd[1472]: time="2025-01-13T21:27:04.121578426Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:27:04.121607 containerd[1472]: time="2025-01-13T21:27:04.121597432Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:27:04.121668 containerd[1472]: time="2025-01-13T21:27:04.121611438Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:27:04.121749 containerd[1472]: time="2025-01-13T21:27:04.121725141Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:27:04.122506 containerd[1472]: time="2025-01-13T21:27:04.122015746Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:27:04.122506 containerd[1472]: time="2025-01-13T21:27:04.122172721Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:27:04.122506 containerd[1472]: time="2025-01-13T21:27:04.122191516Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:27:04.122506 containerd[1472]: time="2025-01-13T21:27:04.122208568Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:27:04.122506 containerd[1472]: time="2025-01-13T21:27:04.122226561Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:27:04.122506 containerd[1472]: time="2025-01-13T21:27:04.122240888Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 13 21:27:04.122506 containerd[1472]: time="2025-01-13T21:27:04.122254173Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:27:04.122506 containerd[1472]: time="2025-01-13T21:27:04.122269442Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:27:04.122506 containerd[1472]: time="2025-01-13T21:27:04.122284550Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:27:04.122506 containerd[1472]: time="2025-01-13T21:27:04.122298366Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:27:04.122506 containerd[1472]: time="2025-01-13T21:27:04.122311310Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:27:04.122506 containerd[1472]: time="2025-01-13T21:27:04.122323473Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:27:04.122506 containerd[1472]: time="2025-01-13T21:27:04.122344002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:27:04.122506 containerd[1472]: time="2025-01-13T21:27:04.122358419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:27:04.122789 containerd[1472]: time="2025-01-13T21:27:04.122371413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:27:04.122789 containerd[1472]: time="2025-01-13T21:27:04.122384778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 13 21:27:04.122789 containerd[1472]: time="2025-01-13T21:27:04.122397943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:27:04.122789 containerd[1472]: time="2025-01-13T21:27:04.122412530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:27:04.122789 containerd[1472]: time="2025-01-13T21:27:04.122425845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:27:04.122789 containerd[1472]: time="2025-01-13T21:27:04.122438920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:27:04.122789 containerd[1472]: time="2025-01-13T21:27:04.122452285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:27:04.122789 containerd[1472]: time="2025-01-13T21:27:04.122468265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:27:04.122979 containerd[1472]: time="2025-01-13T21:27:04.122961710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:27:04.123039 containerd[1472]: time="2025-01-13T21:27:04.123025720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:27:04.123096 containerd[1472]: time="2025-01-13T21:27:04.123082577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:27:04.123155 containerd[1472]: time="2025-01-13T21:27:04.123142409Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:27:04.123219 containerd[1472]: time="2025-01-13T21:27:04.123206139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 13 21:27:04.123279 containerd[1472]: time="2025-01-13T21:27:04.123266041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:27:04.123342 containerd[1472]: time="2025-01-13T21:27:04.123328588Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:27:04.123436 containerd[1472]: time="2025-01-13T21:27:04.123420942Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:27:04.123538 containerd[1472]: time="2025-01-13T21:27:04.123520428Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:27:04.123599 containerd[1472]: time="2025-01-13T21:27:04.123586212Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:27:04.124506 containerd[1472]: time="2025-01-13T21:27:04.123643218Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:27:04.124506 containerd[1472]: time="2025-01-13T21:27:04.123658637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:27:04.124506 containerd[1472]: time="2025-01-13T21:27:04.123673585Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:27:04.124506 containerd[1472]: time="2025-01-13T21:27:04.123684285Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:27:04.124506 containerd[1472]: time="2025-01-13T21:27:04.123694996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:27:04.124669 containerd[1472]: time="2025-01-13T21:27:04.123976894Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:27:04.124669 containerd[1472]: time="2025-01-13T21:27:04.124031427Z" level=info msg="Connect containerd service" Jan 13 21:27:04.124669 containerd[1472]: time="2025-01-13T21:27:04.124060852Z" level=info msg="using legacy CRI server" Jan 13 21:27:04.124669 containerd[1472]: time="2025-01-13T21:27:04.124067604Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:27:04.124669 containerd[1472]: time="2025-01-13T21:27:04.124172952Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:27:04.125188 containerd[1472]: time="2025-01-13T21:27:04.125130428Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:27:04.125402 containerd[1472]: time="2025-01-13T21:27:04.125361992Z" level=info msg="Start subscribing containerd event" Jan 13 21:27:04.125440 containerd[1472]: time="2025-01-13T21:27:04.125411395Z" level=info msg="Start recovering state" Jan 13 21:27:04.125501 containerd[1472]: time="2025-01-13T21:27:04.125465717Z" level=info msg="Start event monitor" Jan 13 21:27:04.125533 containerd[1472]: time="2025-01-13T21:27:04.125501313Z" level=info msg="Start 
snapshots syncer" Jan 13 21:27:04.125533 containerd[1472]: time="2025-01-13T21:27:04.125514678Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:27:04.125533 containerd[1472]: time="2025-01-13T21:27:04.125522373Z" level=info msg="Start streaming server" Jan 13 21:27:04.125735 containerd[1472]: time="2025-01-13T21:27:04.125711588Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:27:04.125870 containerd[1472]: time="2025-01-13T21:27:04.125853113Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:27:04.126034 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:27:04.132207 containerd[1472]: time="2025-01-13T21:27:04.132179188Z" level=info msg="containerd successfully booted in 0.068007s" Jan 13 21:27:04.307232 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:27:04.333915 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:27:04.343170 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:27:04.358357 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:27:04.358591 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:27:04.371196 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:27:04.379046 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:27:04.393312 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:27:04.398557 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:27:04.401934 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:27:05.487052 systemd-networkd[1378]: eth0: Gained IPv6LL Jan 13 21:27:05.488875 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection. Jan 13 21:27:05.490632 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 13 21:27:05.497170 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:27:05.506159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:27:05.522839 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:27:05.568683 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:27:05.592268 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:27:05.608188 systemd[1]: Started sshd@0-172.24.4.197:22-172.24.4.1:42766.service - OpenSSH per-connection server daemon (172.24.4.1:42766). Jan 13 21:27:06.883673 sshd[1542]: Accepted publickey for core from 172.24.4.1 port 42766 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:27:06.889833 sshd-session[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:27:06.920304 systemd-logind[1450]: New session 1 of user core. Jan 13 21:27:06.925711 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:27:06.938201 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:27:06.969835 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:27:06.982450 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:27:06.989253 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:27:07.107663 systemd[1548]: Queued start job for default target default.target. Jan 13 21:27:07.115422 systemd[1548]: Created slice app.slice - User Application Slice. Jan 13 21:27:07.115453 systemd[1548]: Reached target paths.target - Paths. Jan 13 21:27:07.115469 systemd[1548]: Reached target timers.target - Timers. Jan 13 21:27:07.116871 systemd[1548]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 13 21:27:07.148081 systemd[1548]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 21:27:07.148310 systemd[1548]: Reached target sockets.target - Sockets.
Jan 13 21:27:07.148564 systemd[1548]: Reached target basic.target - Basic System.
Jan 13 21:27:07.148658 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 21:27:07.151519 systemd[1548]: Reached target default.target - Main User Target.
Jan 13 21:27:07.151938 systemd[1548]: Startup finished in 156ms.
Jan 13 21:27:07.154754 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 21:27:07.477788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:27:07.501083 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:27:07.653952 systemd[1]: Started sshd@1-172.24.4.197:22-172.24.4.1:42770.service - OpenSSH per-connection server daemon (172.24.4.1:42770).
Jan 13 21:27:08.983352 sshd[1565]: Accepted publickey for core from 172.24.4.1 port 42770 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:27:08.986237 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:09.002588 systemd-logind[1450]: New session 2 of user core.
Jan 13 21:27:09.007882 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 21:27:09.388021 kubelet[1562]: E0113 21:27:09.387779 1562 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:27:09.393270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:27:09.393661 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:27:09.394350 systemd[1]: kubelet.service: Consumed 2.275s CPU time.
Jan 13 21:27:09.448737 login[1527]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 21:27:09.457530 login[1526]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 21:27:09.461309 systemd-logind[1450]: New session 3 of user core.
Jan 13 21:27:09.470684 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 21:27:09.475525 systemd-logind[1450]: New session 4 of user core.
Jan 13 21:27:09.478734 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 21:27:09.732755 sshd[1572]: Connection closed by 172.24.4.1 port 42770
Jan 13 21:27:09.735658 sshd-session[1565]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:09.748567 systemd[1]: sshd@1-172.24.4.197:22-172.24.4.1:42770.service: Deactivated successfully.
Jan 13 21:27:09.752002 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 21:27:09.754147 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit.
Jan 13 21:27:09.768307 systemd[1]: Started sshd@2-172.24.4.197:22-172.24.4.1:42780.service - OpenSSH per-connection server daemon (172.24.4.1:42780).
Jan 13 21:27:09.773198 systemd-logind[1450]: Removed session 2.
Jan 13 21:27:10.609847 coreos-metadata[1438]: Jan 13 21:27:10.609 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 21:27:10.658550 coreos-metadata[1438]: Jan 13 21:27:10.658 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 13 21:27:10.843447 coreos-metadata[1438]: Jan 13 21:27:10.843 INFO Fetch successful
Jan 13 21:27:10.843447 coreos-metadata[1438]: Jan 13 21:27:10.843 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 21:27:10.857323 coreos-metadata[1438]: Jan 13 21:27:10.857 INFO Fetch successful
Jan 13 21:27:10.857454 coreos-metadata[1438]: Jan 13 21:27:10.857 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 13 21:27:10.874234 coreos-metadata[1438]: Jan 13 21:27:10.873 INFO Fetch successful
Jan 13 21:27:10.874234 coreos-metadata[1438]: Jan 13 21:27:10.874 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 13 21:27:10.889026 coreos-metadata[1438]: Jan 13 21:27:10.888 INFO Fetch successful
Jan 13 21:27:10.889241 coreos-metadata[1438]: Jan 13 21:27:10.889 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 13 21:27:10.902085 coreos-metadata[1438]: Jan 13 21:27:10.901 INFO Fetch successful
Jan 13 21:27:10.902085 coreos-metadata[1438]: Jan 13 21:27:10.901 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 13 21:27:10.912826 coreos-metadata[1438]: Jan 13 21:27:10.912 INFO Fetch successful
Jan 13 21:27:10.954342 coreos-metadata[1501]: Jan 13 21:27:10.954 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 21:27:10.954906 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 21:27:10.956205 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 21:27:10.997945 coreos-metadata[1501]: Jan 13 21:27:10.997 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 13 21:27:11.010946 coreos-metadata[1501]: Jan 13 21:27:11.010 INFO Fetch successful
Jan 13 21:27:11.011068 coreos-metadata[1501]: Jan 13 21:27:11.011 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 13 21:27:11.021043 coreos-metadata[1501]: Jan 13 21:27:11.020 INFO Fetch successful
Jan 13 21:27:11.025392 sshd[1603]: Accepted publickey for core from 172.24.4.1 port 42780 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:27:11.027326 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:11.032158 unknown[1501]: wrote ssh authorized keys file for user: core
Jan 13 21:27:11.034456 systemd-logind[1450]: New session 5 of user core.
Jan 13 21:27:11.038285 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 21:27:11.072607 update-ssh-keys[1614]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:27:11.073734 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 21:27:11.075730 systemd[1]: Finished sshkeys.service.
Jan 13 21:27:11.081257 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 21:27:11.081611 systemd[1]: Startup finished in 1.138s (kernel) + 14.739s (initrd) + 10.676s (userspace) = 26.554s.
Jan 13 21:27:11.516925 sshd[1616]: Connection closed by 172.24.4.1 port 42780
Jan 13 21:27:11.516744 sshd-session[1603]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:11.522592 systemd[1]: sshd@2-172.24.4.197:22-172.24.4.1:42780.service: Deactivated successfully.
Jan 13 21:27:11.526151 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 21:27:11.529025 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit.
Jan 13 21:27:11.531059 systemd-logind[1450]: Removed session 5.
Jan 13 21:27:19.644778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:27:19.660547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:27:19.986645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:27:20.001114 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:27:20.095156 kubelet[1629]: E0113 21:27:20.095090 1629 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:27:20.098856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:27:20.099017 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:27:21.537040 systemd[1]: Started sshd@3-172.24.4.197:22-172.24.4.1:36814.service - OpenSSH per-connection server daemon (172.24.4.1:36814).
Jan 13 21:27:22.783681 sshd[1638]: Accepted publickey for core from 172.24.4.1 port 36814 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:27:22.786369 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:22.795403 systemd-logind[1450]: New session 6 of user core.
Jan 13 21:27:22.807775 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 21:27:23.314560 sshd[1640]: Connection closed by 172.24.4.1 port 36814
Jan 13 21:27:23.315171 sshd-session[1638]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:23.324024 systemd[1]: sshd@3-172.24.4.197:22-172.24.4.1:36814.service: Deactivated successfully.
Jan 13 21:27:23.326984 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 21:27:23.328460 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit.
Jan 13 21:27:23.337057 systemd[1]: Started sshd@4-172.24.4.197:22-172.24.4.1:41616.service - OpenSSH per-connection server daemon (172.24.4.1:41616).
Jan 13 21:27:23.339467 systemd-logind[1450]: Removed session 6.
Jan 13 21:27:24.544344 sshd[1645]: Accepted publickey for core from 172.24.4.1 port 41616 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:27:24.547194 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:24.557008 systemd-logind[1450]: New session 7 of user core.
Jan 13 21:27:24.568814 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 21:27:25.136229 sshd[1647]: Connection closed by 172.24.4.1 port 41616
Jan 13 21:27:25.137258 sshd-session[1645]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:25.150948 systemd[1]: sshd@4-172.24.4.197:22-172.24.4.1:41616.service: Deactivated successfully.
Jan 13 21:27:25.154424 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 21:27:25.157801 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit.
Jan 13 21:27:25.164020 systemd[1]: Started sshd@5-172.24.4.197:22-172.24.4.1:41624.service - OpenSSH per-connection server daemon (172.24.4.1:41624).
Jan 13 21:27:25.167027 systemd-logind[1450]: Removed session 7.
Jan 13 21:27:26.541105 sshd[1652]: Accepted publickey for core from 172.24.4.1 port 41624 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:27:26.543715 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:26.553383 systemd-logind[1450]: New session 8 of user core.
Jan 13 21:27:26.566763 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 21:27:27.183737 sshd[1654]: Connection closed by 172.24.4.1 port 41624
Jan 13 21:27:27.184736 sshd-session[1652]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:27.194367 systemd[1]: sshd@5-172.24.4.197:22-172.24.4.1:41624.service: Deactivated successfully.
Jan 13 21:27:27.197609 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 21:27:27.199319 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit.
Jan 13 21:27:27.210041 systemd[1]: Started sshd@6-172.24.4.197:22-172.24.4.1:41636.service - OpenSSH per-connection server daemon (172.24.4.1:41636).
Jan 13 21:27:27.212939 systemd-logind[1450]: Removed session 8.
Jan 13 21:27:28.561213 sshd[1659]: Accepted publickey for core from 172.24.4.1 port 41636 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:27:28.563861 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:28.573389 systemd-logind[1450]: New session 9 of user core.
Jan 13 21:27:28.584974 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 21:27:29.017455 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 21:27:29.018139 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:27:29.035430 sudo[1662]: pam_unix(sudo:session): session closed for user root
Jan 13 21:27:29.215940 sshd[1661]: Connection closed by 172.24.4.1 port 41636
Jan 13 21:27:29.213777 sshd-session[1659]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:29.226732 systemd[1]: sshd@6-172.24.4.197:22-172.24.4.1:41636.service: Deactivated successfully.
Jan 13 21:27:29.229577 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 21:27:29.232843 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit.
Jan 13 21:27:29.240010 systemd[1]: Started sshd@7-172.24.4.197:22-172.24.4.1:41642.service - OpenSSH per-connection server daemon (172.24.4.1:41642).
Jan 13 21:27:29.243107 systemd-logind[1450]: Removed session 9.
Jan 13 21:27:30.300769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 21:27:30.311822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:27:30.459691 sshd[1667]: Accepted publickey for core from 172.24.4.1 port 41642 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:27:30.463187 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:30.476620 systemd-logind[1450]: New session 10 of user core.
Jan 13 21:27:30.481996 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 21:27:30.694922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:27:30.708025 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:27:30.797435 kubelet[1678]: E0113 21:27:30.797339 1678 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:27:30.802009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:27:30.802382 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:27:30.957112 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 21:27:30.957851 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:27:30.965136 sudo[1688]: pam_unix(sudo:session): session closed for user root
Jan 13 21:27:30.976307 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 21:27:30.977004 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:27:31.001157 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 21:27:31.072348 augenrules[1710]: No rules
Jan 13 21:27:31.073471 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 21:27:31.073932 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 21:27:31.075905 sudo[1687]: pam_unix(sudo:session): session closed for user root
Jan 13 21:27:31.313276 sshd[1672]: Connection closed by 172.24.4.1 port 41642
Jan 13 21:27:31.314792 sshd-session[1667]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:31.328575 systemd[1]: sshd@7-172.24.4.197:22-172.24.4.1:41642.service: Deactivated successfully.
Jan 13 21:27:31.332031 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 21:27:31.335243 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit.
Jan 13 21:27:31.342212 systemd[1]: Started sshd@8-172.24.4.197:22-172.24.4.1:41644.service - OpenSSH per-connection server daemon (172.24.4.1:41644).
Jan 13 21:27:31.345250 systemd-logind[1450]: Removed session 10.
Jan 13 21:27:32.400066 sshd[1718]: Accepted publickey for core from 172.24.4.1 port 41644 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:27:32.402701 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:32.411799 systemd-logind[1450]: New session 11 of user core.
Jan 13 21:27:32.420766 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 21:27:32.977599 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 21:27:32.978975 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:27:34.413167 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:27:34.426117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:27:34.479299 systemd[1]: Reloading requested from client PID 1758 ('systemctl') (unit session-11.scope)...
Jan 13 21:27:34.479336 systemd[1]: Reloading...
Jan 13 21:27:34.581513 zram_generator::config[1796]: No configuration found.
Jan 13 21:27:34.922866 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:27:35.003105 systemd[1]: Reloading finished in 522 ms.
Jan 13 21:27:35.068279 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 21:27:35.068357 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 21:27:35.068613 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:27:35.074731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:27:35.190915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:27:35.207774 (kubelet)[1863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 21:27:35.260039 kubelet[1863]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:27:35.260039 kubelet[1863]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 21:27:35.260039 kubelet[1863]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:27:35.260451 kubelet[1863]: I0113 21:27:35.260084 1863 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:27:35.687646 kubelet[1863]: I0113 21:27:35.687459 1863 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 13 21:27:35.687646 kubelet[1863]: I0113 21:27:35.687503 1863 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:27:35.687880 kubelet[1863]: I0113 21:27:35.687733 1863 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 13 21:27:35.703299 kubelet[1863]: I0113 21:27:35.703221 1863 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:27:35.721604 kubelet[1863]: I0113 21:27:35.720640 1863 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:27:35.721604 kubelet[1863]: I0113 21:27:35.720890 1863 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:27:35.721604 kubelet[1863]: I0113 21:27:35.720921 1863 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.197","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 21:27:35.721604 kubelet[1863]: I0113 21:27:35.721326 1863 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:27:35.721852 kubelet[1863]: I0113 21:27:35.721337 1863 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 21:27:35.721852 kubelet[1863]: I0113 21:27:35.721457 1863 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:27:35.723278 kubelet[1863]: I0113 21:27:35.722995 1863 kubelet.go:400] "Attempting to sync node with API server"
Jan 13 21:27:35.723278 kubelet[1863]: I0113 21:27:35.723016 1863 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:27:35.723278 kubelet[1863]: I0113 21:27:35.723038 1863 kubelet.go:312] "Adding apiserver pod source"
Jan 13 21:27:35.723278 kubelet[1863]: I0113 21:27:35.723053 1863 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:27:35.726652 kubelet[1863]: E0113 21:27:35.725861 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:35.726652 kubelet[1863]: E0113 21:27:35.725946 1863 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:35.730173 kubelet[1863]: I0113 21:27:35.729869 1863 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 21:27:35.733921 kubelet[1863]: I0113 21:27:35.733886 1863 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 21:27:35.734020 kubelet[1863]: W0113 21:27:35.733995 1863 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 21:27:35.735440 kubelet[1863]: I0113 21:27:35.735397 1863 server.go:1264] "Started kubelet"
Jan 13 21:27:35.735831 kubelet[1863]: W0113 21:27:35.735814 1863 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.197" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 21:27:35.735914 kubelet[1863]: E0113 21:27:35.735904 1863 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.197" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 21:27:35.737363 kubelet[1863]: I0113 21:27:35.737347 1863 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:27:35.738273 kubelet[1863]: I0113 21:27:35.738214 1863 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 21:27:35.740851 kubelet[1863]: I0113 21:27:35.740594 1863 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 21:27:35.742686 kubelet[1863]: I0113 21:27:35.742474 1863 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 21:27:35.742936 kubelet[1863]: I0113 21:27:35.742900 1863 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 21:27:35.748397 kubelet[1863]: I0113 21:27:35.747727 1863 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 21:27:35.748905 kubelet[1863]: I0113 21:27:35.748693 1863 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 21:27:35.748905 kubelet[1863]: I0113 21:27:35.748814 1863 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 21:27:35.756316 kubelet[1863]: I0113 21:27:35.753400 1863 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:27:35.756316 kubelet[1863]: I0113 21:27:35.753617 1863 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:27:35.762736 kubelet[1863]: E0113 21:27:35.762656 1863 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.197\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 13 21:27:35.762805 kubelet[1863]: W0113 21:27:35.762771 1863 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 13 21:27:35.762852 kubelet[1863]: E0113 21:27:35.762812 1863 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 13 21:27:35.763114 kubelet[1863]: W0113 21:27:35.762980 1863 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 21:27:35.763114 kubelet[1863]: E0113 21:27:35.763024 1863 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 21:27:35.767506 kubelet[1863]: I0113 21:27:35.766723 1863 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:27:36.851990 systemd-timesyncd[1380]: Contacted time server 194.57.169.1:123 (2.flatcar.pool.ntp.org).
Jan 13 21:27:36.852498 systemd-timesyncd[1380]: Initial clock synchronization to Mon 2025-01-13 21:27:36.851797 UTC.
Jan 13 21:27:36.853522 systemd-resolved[1379]: Clock change detected. Flushing caches.
Jan 13 21:27:36.876223 kubelet[1863]: I0113 21:27:36.876059 1863 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:27:36.876223 kubelet[1863]: I0113 21:27:36.876075 1863 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:27:36.876223 kubelet[1863]: I0113 21:27:36.876088 1863 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:27:36.882168 kubelet[1863]: I0113 21:27:36.882099 1863 policy_none.go:49] "None policy: Start"
Jan 13 21:27:36.883673 kubelet[1863]: I0113 21:27:36.883374 1863 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:27:36.883673 kubelet[1863]: I0113 21:27:36.883393 1863 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:27:36.892070 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 21:27:36.901313 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 21:27:36.906330 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 21:27:36.912865 kubelet[1863]: I0113 21:27:36.912847 1863 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:27:36.915149 kubelet[1863]: I0113 21:27:36.915077 1863 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 21:27:36.915356 kubelet[1863]: I0113 21:27:36.915334 1863 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:27:36.918390 kubelet[1863]: E0113 21:27:36.918369 1863 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.197\" not found"
Jan 13 21:27:36.924018 kubelet[1863]: I0113 21:27:36.923982 1863 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:27:36.926328 kubelet[1863]: I0113 21:27:36.926292 1863 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:27:36.926408 kubelet[1863]: I0113 21:27:36.926331 1863 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:27:36.926408 kubelet[1863]: I0113 21:27:36.926356 1863 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 21:27:36.926463 kubelet[1863]: E0113 21:27:36.926415 1863 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 13 21:27:36.927168 kubelet[1863]: I0113 21:27:36.926722 1863 kubelet_node_status.go:73] "Attempting to register node" node="172.24.4.197"
Jan 13 21:27:36.932565 kubelet[1863]: I0113 21:27:36.932514 1863 kubelet_node_status.go:76] "Successfully registered node" node="172.24.4.197"
Jan 13 21:27:36.948426 kubelet[1863]: E0113 21:27:36.948393 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.197\" not found"
Jan 13 21:27:37.050311 kubelet[1863]: E0113 21:27:37.048774 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.197\" not found"
Jan 13 21:27:37.149454 kubelet[1863]: E0113 21:27:37.149375 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.197\" not found"
Jan 13 21:27:37.250486 kubelet[1863]: E0113 21:27:37.250414 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.197\" not found"
Jan 13 21:27:37.351245 kubelet[1863]: E0113 21:27:37.350980 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.197\" not found"
Jan 13 21:27:37.403913 sudo[1721]: pam_unix(sudo:session): session closed for user root
Jan 13 21:27:37.452255 kubelet[1863]: E0113 21:27:37.452187 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.197\" not found"
Jan 13 21:27:37.553296 kubelet[1863]: E0113 21:27:37.553219 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.197\" not found"
Jan 13 21:27:37.646420 sshd[1720]: Connection closed by 172.24.4.1 port 41644
Jan 13 21:27:37.647698 sshd-session[1718]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:37.653998 kubelet[1863]: E0113 21:27:37.653744 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.197\" not found"
Jan 13 21:27:37.654550 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit.
Jan 13 21:27:37.655991 systemd[1]: sshd@8-172.24.4.197:22-172.24.4.1:41644.service: Deactivated successfully.
Jan 13 21:27:37.659463 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 21:27:37.662061 systemd-logind[1450]: Removed session 11.
Jan 13 21:27:37.755026 kubelet[1863]: E0113 21:27:37.754910 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.197\" not found"
Jan 13 21:27:37.767311 kubelet[1863]: I0113 21:27:37.767203 1863 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 13 21:27:37.767747 kubelet[1863]: W0113 21:27:37.767540 1863 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 21:27:37.803789 kubelet[1863]: E0113 21:27:37.803669 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:37.855236 kubelet[1863]: E0113 21:27:37.855108 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.197\" not found"
Jan 13 21:27:37.956572 kubelet[1863]: E0113 21:27:37.956349 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.197\" not found"
Jan 13 21:27:38.059231 kubelet[1863]: I0113 21:27:38.059007 1863 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 13 21:27:38.060441 containerd[1472]: time="2025-01-13T21:27:38.060083259Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 21:27:38.061058 kubelet[1863]: I0113 21:27:38.060596 1863 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 13 21:27:38.804524 kubelet[1863]: E0113 21:27:38.804447 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:38.806458 kubelet[1863]: I0113 21:27:38.806393 1863 apiserver.go:52] "Watching apiserver"
Jan 13 21:27:38.844995 kubelet[1863]: I0113 21:27:38.844854 1863 topology_manager.go:215] "Topology Admit Handler" podUID="4beb52c5-148d-4c20-9b34-55c09a85b7c9" podNamespace="calico-system" podName="calico-node-6vggm"
Jan 13 21:27:38.845252 kubelet[1863]: I0113 21:27:38.845043 1863 topology_manager.go:215] "Topology Admit Handler" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72" podNamespace="calico-system" podName="csi-node-driver-d7zd4"
Jan 13 21:27:38.845252 kubelet[1863]: I0113 21:27:38.845220 1863 topology_manager.go:215] "Topology Admit Handler" podUID="652a3a86-e2d5-40f8-a8ad-3e1f946935d4" podNamespace="kube-system" podName="kube-proxy-48d2w"
Jan 13 21:27:38.847031 kubelet[1863]: E0113 21:27:38.845544 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7zd4" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72"
Jan 13 21:27:38.865890 systemd[1]: Created slice kubepods-besteffort-pod4beb52c5_148d_4c20_9b34_55c09a85b7c9.slice - libcontainer container kubepods-besteffort-pod4beb52c5_148d_4c20_9b34_55c09a85b7c9.slice.
Jan 13 21:27:38.885564 systemd[1]: Created slice kubepods-besteffort-pod652a3a86_e2d5_40f8_a8ad_3e1f946935d4.slice - libcontainer container kubepods-besteffort-pod652a3a86_e2d5_40f8_a8ad_3e1f946935d4.slice.
Jan 13 21:27:38.930321 kubelet[1863]: I0113 21:27:38.930219 1863 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 13 21:27:38.939938 kubelet[1863]: I0113 21:27:38.938765 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/652a3a86-e2d5-40f8-a8ad-3e1f946935d4-lib-modules\") pod \"kube-proxy-48d2w\" (UID: \"652a3a86-e2d5-40f8-a8ad-3e1f946935d4\") " pod="kube-system/kube-proxy-48d2w"
Jan 13 21:27:38.939938 kubelet[1863]: I0113 21:27:38.938845 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4beb52c5-148d-4c20-9b34-55c09a85b7c9-flexvol-driver-host\") pod \"calico-node-6vggm\" (UID: \"4beb52c5-148d-4c20-9b34-55c09a85b7c9\") " pod="calico-system/calico-node-6vggm"
Jan 13 21:27:38.939938 kubelet[1863]: I0113 21:27:38.938904 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w45b\" (UniqueName: \"kubernetes.io/projected/bd59e032-03c6-4c4d-bd4a-80c72aff8c72-kube-api-access-6w45b\") pod \"csi-node-driver-d7zd4\" (UID: \"bd59e032-03c6-4c4d-bd4a-80c72aff8c72\") " pod="calico-system/csi-node-driver-d7zd4"
Jan 13 21:27:38.939938 kubelet[1863]: I0113 21:27:38.938950 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4beb52c5-148d-4c20-9b34-55c09a85b7c9-var-run-calico\") pod \"calico-node-6vggm\" (UID: \"4beb52c5-148d-4c20-9b34-55c09a85b7c9\") " pod="calico-system/calico-node-6vggm"
Jan 13 21:27:38.939938 kubelet[1863]: I0113 21:27:38.938994 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4beb52c5-148d-4c20-9b34-55c09a85b7c9-cni-bin-dir\") pod \"calico-node-6vggm\" (UID: \"4beb52c5-148d-4c20-9b34-55c09a85b7c9\") " pod="calico-system/calico-node-6vggm"
Jan 13 21:27:38.940648 kubelet[1863]: I0113 21:27:38.939035 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4beb52c5-148d-4c20-9b34-55c09a85b7c9-cni-net-dir\") pod \"calico-node-6vggm\" (UID: \"4beb52c5-148d-4c20-9b34-55c09a85b7c9\") " pod="calico-system/calico-node-6vggm"
Jan 13 21:27:38.940648 kubelet[1863]: I0113 21:27:38.939079 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd59e032-03c6-4c4d-bd4a-80c72aff8c72-kubelet-dir\") pod \"csi-node-driver-d7zd4\" (UID: \"bd59e032-03c6-4c4d-bd4a-80c72aff8c72\") " pod="calico-system/csi-node-driver-d7zd4"
Jan 13 21:27:38.940648 kubelet[1863]: I0113 21:27:38.939160 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4g7p\" (UniqueName: \"kubernetes.io/projected/652a3a86-e2d5-40f8-a8ad-3e1f946935d4-kube-api-access-l4g7p\") pod \"kube-proxy-48d2w\" (UID: \"652a3a86-e2d5-40f8-a8ad-3e1f946935d4\") " pod="kube-system/kube-proxy-48d2w"
Jan 13 21:27:38.940648 kubelet[1863]: I0113 21:27:38.939208 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4beb52c5-148d-4c20-9b34-55c09a85b7c9-policysync\") pod \"calico-node-6vggm\" (UID: \"4beb52c5-148d-4c20-9b34-55c09a85b7c9\") " pod="calico-system/calico-node-6vggm"
Jan 13 21:27:38.940648 kubelet[1863]: I0113 21:27:38.939250 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4beb52c5-148d-4c20-9b34-55c09a85b7c9-tigera-ca-bundle\") pod \"calico-node-6vggm\" (UID: \"4beb52c5-148d-4c20-9b34-55c09a85b7c9\") " pod="calico-system/calico-node-6vggm"
Jan 13 21:27:38.940942 kubelet[1863]: I0113 21:27:38.939292 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bd59e032-03c6-4c4d-bd4a-80c72aff8c72-socket-dir\") pod \"csi-node-driver-d7zd4\" (UID: \"bd59e032-03c6-4c4d-bd4a-80c72aff8c72\") " pod="calico-system/csi-node-driver-d7zd4"
Jan 13 21:27:38.940942 kubelet[1863]: I0113 21:27:38.939334 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/652a3a86-e2d5-40f8-a8ad-3e1f946935d4-kube-proxy\") pod \"kube-proxy-48d2w\" (UID: \"652a3a86-e2d5-40f8-a8ad-3e1f946935d4\") " pod="kube-system/kube-proxy-48d2w"
Jan 13 21:27:38.940942 kubelet[1863]: I0113 21:27:38.939373 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/652a3a86-e2d5-40f8-a8ad-3e1f946935d4-xtables-lock\") pod \"kube-proxy-48d2w\" (UID: \"652a3a86-e2d5-40f8-a8ad-3e1f946935d4\") " pod="kube-system/kube-proxy-48d2w"
Jan 13 21:27:38.940942 kubelet[1863]: I0113 21:27:38.939426 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4beb52c5-148d-4c20-9b34-55c09a85b7c9-lib-modules\") pod \"calico-node-6vggm\" (UID: \"4beb52c5-148d-4c20-9b34-55c09a85b7c9\") " pod="calico-system/calico-node-6vggm"
Jan 13 21:27:38.940942 kubelet[1863]: I0113 21:27:38.939468 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4beb52c5-148d-4c20-9b34-55c09a85b7c9-cni-log-dir\") pod \"calico-node-6vggm\" (UID: \"4beb52c5-148d-4c20-9b34-55c09a85b7c9\") " pod="calico-system/calico-node-6vggm"
Jan 13 21:27:38.941262 kubelet[1863]: I0113 21:27:38.939509 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4beb52c5-148d-4c20-9b34-55c09a85b7c9-var-lib-calico\") pod \"calico-node-6vggm\" (UID: \"4beb52c5-148d-4c20-9b34-55c09a85b7c9\") " pod="calico-system/calico-node-6vggm"
Jan 13 21:27:38.941262 kubelet[1863]: I0113 21:27:38.939553 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sv7l\" (UniqueName: \"kubernetes.io/projected/4beb52c5-148d-4c20-9b34-55c09a85b7c9-kube-api-access-4sv7l\") pod \"calico-node-6vggm\" (UID: \"4beb52c5-148d-4c20-9b34-55c09a85b7c9\") " pod="calico-system/calico-node-6vggm"
Jan 13 21:27:38.941262 kubelet[1863]: I0113 21:27:38.939599 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bd59e032-03c6-4c4d-bd4a-80c72aff8c72-varrun\") pod \"csi-node-driver-d7zd4\" (UID: \"bd59e032-03c6-4c4d-bd4a-80c72aff8c72\") " pod="calico-system/csi-node-driver-d7zd4"
Jan 13 21:27:38.941262 kubelet[1863]: I0113 21:27:38.939646 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bd59e032-03c6-4c4d-bd4a-80c72aff8c72-registration-dir\") pod \"csi-node-driver-d7zd4\" (UID: \"bd59e032-03c6-4c4d-bd4a-80c72aff8c72\") " pod="calico-system/csi-node-driver-d7zd4"
Jan 13 21:27:38.941262 kubelet[1863]: I0113 21:27:38.939690 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4beb52c5-148d-4c20-9b34-55c09a85b7c9-xtables-lock\") pod \"calico-node-6vggm\" (UID: \"4beb52c5-148d-4c20-9b34-55c09a85b7c9\") " pod="calico-system/calico-node-6vggm"
Jan 13 21:27:38.941531 kubelet[1863]: I0113 21:27:38.939731 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4beb52c5-148d-4c20-9b34-55c09a85b7c9-node-certs\") pod \"calico-node-6vggm\" (UID: \"4beb52c5-148d-4c20-9b34-55c09a85b7c9\") " pod="calico-system/calico-node-6vggm"
Jan 13 21:27:39.047208 kubelet[1863]: E0113 21:27:39.046426 1863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:27:39.047208 kubelet[1863]: W0113 21:27:39.046471 1863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:27:39.047208 kubelet[1863]: E0113 21:27:39.046518 1863 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:27:39.093705 kubelet[1863]: E0113 21:27:39.090895 1863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:27:39.093705 kubelet[1863]: W0113 21:27:39.090942 1863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:27:39.093705 kubelet[1863]: E0113 21:27:39.090999 1863 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:27:39.097275 kubelet[1863]: E0113 21:27:39.097243 1863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:27:39.097642 kubelet[1863]: W0113 21:27:39.097609 1863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:27:39.098220 kubelet[1863]: E0113 21:27:39.097777 1863 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:27:39.104204 kubelet[1863]: E0113 21:27:39.100243 1863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:27:39.104204 kubelet[1863]: W0113 21:27:39.100286 1863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:27:39.104204 kubelet[1863]: E0113 21:27:39.100319 1863 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:27:39.111267 kubelet[1863]: E0113 21:27:39.111218 1863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:27:39.111267 kubelet[1863]: W0113 21:27:39.111253 1863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:27:39.111544 kubelet[1863]: E0113 21:27:39.111297 1863 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:27:39.179357 containerd[1472]: time="2025-01-13T21:27:39.179100386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6vggm,Uid:4beb52c5-148d-4c20-9b34-55c09a85b7c9,Namespace:calico-system,Attempt:0,}"
Jan 13 21:27:39.194327 containerd[1472]: time="2025-01-13T21:27:39.193612854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-48d2w,Uid:652a3a86-e2d5-40f8-a8ad-3e1f946935d4,Namespace:kube-system,Attempt:0,}"
Jan 13 21:27:39.805160 kubelet[1863]: E0113 21:27:39.805051 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:39.897845 containerd[1472]: time="2025-01-13T21:27:39.897497960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:27:39.901051 containerd[1472]: time="2025-01-13T21:27:39.900965954Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:27:39.903024 containerd[1472]: time="2025-01-13T21:27:39.902857893Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 13 21:27:39.906079 containerd[1472]: time="2025-01-13T21:27:39.905481553Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:27:39.906079 containerd[1472]: time="2025-01-13T21:27:39.906017889Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 21:27:39.910359 containerd[1472]: time="2025-01-13T21:27:39.910259825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:27:39.914651 containerd[1472]: time="2025-01-13T21:27:39.914300203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 720.499156ms"
Jan 13 21:27:39.920001 containerd[1472]: time="2025-01-13T21:27:39.919688208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 740.330099ms"
Jan 13 21:27:40.064481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586770613.mount: Deactivated successfully.
Jan 13 21:27:40.345615 containerd[1472]: time="2025-01-13T21:27:40.345311264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:27:40.345615 containerd[1472]: time="2025-01-13T21:27:40.345489628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:27:40.346826 containerd[1472]: time="2025-01-13T21:27:40.346592757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:40.349811 containerd[1472]: time="2025-01-13T21:27:40.349392378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:40.372080 containerd[1472]: time="2025-01-13T21:27:40.369603724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:27:40.372080 containerd[1472]: time="2025-01-13T21:27:40.369698843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:27:40.372080 containerd[1472]: time="2025-01-13T21:27:40.369731474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:40.372080 containerd[1472]: time="2025-01-13T21:27:40.371651785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:40.657369 systemd[1]: Started cri-containerd-1ab5c84693f132d49e659c73d174f233fdfd664fa8eba53765e2ddc517ea6aac.scope - libcontainer container 1ab5c84693f132d49e659c73d174f233fdfd664fa8eba53765e2ddc517ea6aac.
Jan 13 21:27:40.658555 systemd[1]: Started cri-containerd-9d48563ad00c93c0853826d28b7f7c62b04482c3c6c4fbbd8c6add655e8c1ee0.scope - libcontainer container 9d48563ad00c93c0853826d28b7f7c62b04482c3c6c4fbbd8c6add655e8c1ee0.
Jan 13 21:27:40.703722 containerd[1472]: time="2025-01-13T21:27:40.703684362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6vggm,Uid:4beb52c5-148d-4c20-9b34-55c09a85b7c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d48563ad00c93c0853826d28b7f7c62b04482c3c6c4fbbd8c6add655e8c1ee0\""
Jan 13 21:27:40.707575 containerd[1472]: time="2025-01-13T21:27:40.707425158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 21:27:40.718586 containerd[1472]: time="2025-01-13T21:27:40.718547849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-48d2w,Uid:652a3a86-e2d5-40f8-a8ad-3e1f946935d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ab5c84693f132d49e659c73d174f233fdfd664fa8eba53765e2ddc517ea6aac\""
Jan 13 21:27:40.805484 kubelet[1863]: E0113 21:27:40.805420 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:40.927737 kubelet[1863]: E0113 21:27:40.926865 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7zd4" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72"
Jan 13 21:27:41.805677 kubelet[1863]: E0113 21:27:41.805576 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:42.437412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1461824884.mount: Deactivated successfully.
Jan 13 21:27:42.586906 containerd[1472]: time="2025-01-13T21:27:42.586851712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:42.588152 containerd[1472]: time="2025-01-13T21:27:42.588043057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 13 21:27:42.589483 containerd[1472]: time="2025-01-13T21:27:42.589408257Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:42.592996 containerd[1472]: time="2025-01-13T21:27:42.592947104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:42.594651 containerd[1472]: time="2025-01-13T21:27:42.593992405Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.8864148s"
Jan 13 21:27:42.594651 containerd[1472]: time="2025-01-13T21:27:42.594043070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 13 21:27:42.596395 containerd[1472]: time="2025-01-13T21:27:42.596143319Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Jan 13 21:27:42.597909 containerd[1472]: time="2025-01-13T21:27:42.597663851Z" level=info msg="CreateContainer within sandbox \"9d48563ad00c93c0853826d28b7f7c62b04482c3c6c4fbbd8c6add655e8c1ee0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 13 21:27:42.618455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3875113057.mount: Deactivated successfully.
Jan 13 21:27:42.633085 containerd[1472]: time="2025-01-13T21:27:42.633017306Z" level=info msg="CreateContainer within sandbox \"9d48563ad00c93c0853826d28b7f7c62b04482c3c6c4fbbd8c6add655e8c1ee0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"335aba3dd5667b55734d8a95129f67bfb4379b1b29ff4cf234d3c6974285982c\""
Jan 13 21:27:42.633971 containerd[1472]: time="2025-01-13T21:27:42.633933955Z" level=info msg="StartContainer for \"335aba3dd5667b55734d8a95129f67bfb4379b1b29ff4cf234d3c6974285982c\""
Jan 13 21:27:42.678362 systemd[1]: Started cri-containerd-335aba3dd5667b55734d8a95129f67bfb4379b1b29ff4cf234d3c6974285982c.scope - libcontainer container 335aba3dd5667b55734d8a95129f67bfb4379b1b29ff4cf234d3c6974285982c.
Jan 13 21:27:42.712604 containerd[1472]: time="2025-01-13T21:27:42.712344474Z" level=info msg="StartContainer for \"335aba3dd5667b55734d8a95129f67bfb4379b1b29ff4cf234d3c6974285982c\" returns successfully"
Jan 13 21:27:42.721010 systemd[1]: cri-containerd-335aba3dd5667b55734d8a95129f67bfb4379b1b29ff4cf234d3c6974285982c.scope: Deactivated successfully.
Jan 13 21:27:42.806803 kubelet[1863]: E0113 21:27:42.806749 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:42.848279 containerd[1472]: time="2025-01-13T21:27:42.847865283Z" level=info msg="shim disconnected" id=335aba3dd5667b55734d8a95129f67bfb4379b1b29ff4cf234d3c6974285982c namespace=k8s.io
Jan 13 21:27:42.848279 containerd[1472]: time="2025-01-13T21:27:42.847960352Z" level=warning msg="cleaning up after shim disconnected" id=335aba3dd5667b55734d8a95129f67bfb4379b1b29ff4cf234d3c6974285982c namespace=k8s.io
Jan 13 21:27:42.848279 containerd[1472]: time="2025-01-13T21:27:42.847984056Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:27:42.929018 kubelet[1863]: E0113 21:27:42.928951 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7zd4" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72"
Jan 13 21:27:43.377611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-335aba3dd5667b55734d8a95129f67bfb4379b1b29ff4cf234d3c6974285982c-rootfs.mount: Deactivated successfully.
Jan 13 21:27:43.808416 kubelet[1863]: E0113 21:27:43.808271 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:43.982675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2857517483.mount: Deactivated successfully.
Jan 13 21:27:44.507458 containerd[1472]: time="2025-01-13T21:27:44.507394550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:44.509151 containerd[1472]: time="2025-01-13T21:27:44.508914210Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478"
Jan 13 21:27:44.510244 containerd[1472]: time="2025-01-13T21:27:44.510203228Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:44.513239 containerd[1472]: time="2025-01-13T21:27:44.513167577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:44.514162 containerd[1472]: time="2025-01-13T21:27:44.513828236Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.917645403s"
Jan 13 21:27:44.514162 containerd[1472]: time="2025-01-13T21:27:44.513861188Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Jan 13 21:27:44.516055 containerd[1472]: time="2025-01-13T21:27:44.516033863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 13 21:27:44.517353 containerd[1472]: time="2025-01-13T21:27:44.517316649Z" level=info msg="CreateContainer within sandbox \"1ab5c84693f132d49e659c73d174f233fdfd664fa8eba53765e2ddc517ea6aac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 21:27:44.533761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount885725646.mount: Deactivated successfully.
Jan 13 21:27:44.547158 containerd[1472]: time="2025-01-13T21:27:44.546804463Z" level=info msg="CreateContainer within sandbox \"1ab5c84693f132d49e659c73d174f233fdfd664fa8eba53765e2ddc517ea6aac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"493fc452686464424791ffe1a7099cc2ffb5040d78e68735b5ee6bceeee7b960\""
Jan 13 21:27:44.547856 containerd[1472]: time="2025-01-13T21:27:44.547769243Z" level=info msg="StartContainer for \"493fc452686464424791ffe1a7099cc2ffb5040d78e68735b5ee6bceeee7b960\""
Jan 13 21:27:44.579562 systemd[1]: Started cri-containerd-493fc452686464424791ffe1a7099cc2ffb5040d78e68735b5ee6bceeee7b960.scope - libcontainer container 493fc452686464424791ffe1a7099cc2ffb5040d78e68735b5ee6bceeee7b960.
Jan 13 21:27:44.615937 containerd[1472]: time="2025-01-13T21:27:44.615811817Z" level=info msg="StartContainer for \"493fc452686464424791ffe1a7099cc2ffb5040d78e68735b5ee6bceeee7b960\" returns successfully"
Jan 13 21:27:44.809741 kubelet[1863]: E0113 21:27:44.809547 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:44.929190 kubelet[1863]: E0113 21:27:44.928694 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7zd4" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72"
Jan 13 21:27:44.980046 kubelet[1863]: I0113 21:27:44.979111 1863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-48d2w" podStartSLOduration=5.1833795 podStartE2EDuration="8.979079134s" podCreationTimestamp="2025-01-13 21:27:36 +0000 UTC" firstStartedPulling="2025-01-13 21:27:40.719615351 +0000 UTC m=+4.430593658" lastFinishedPulling="2025-01-13 21:27:44.515314985 +0000 UTC m=+8.226293292" observedRunningTime="2025-01-13 21:27:44.978951305 +0000 UTC m=+8.689929652" watchObservedRunningTime="2025-01-13 21:27:44.979079134 +0000 UTC m=+8.690057491"
Jan 13 21:27:45.810745 kubelet[1863]: E0113 21:27:45.810660 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:46.811379 kubelet[1863]: E0113 21:27:46.811317 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:46.931738 kubelet[1863]: E0113 21:27:46.931499 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7zd4" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72"
Jan 13 21:27:47.811803 kubelet[1863]: E0113 21:27:47.811698 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:48.812290 kubelet[1863]: E0113 21:27:48.812234 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:48.927972 kubelet[1863]: E0113 21:27:48.927723 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7zd4" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72"
Jan 13 21:27:49.610263 update_engine[1455]: I20250113 21:27:49.610208 1455 update_attempter.cc:509] Updating boot flags...
Jan 13 21:27:49.651288 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2255) Jan 13 21:27:49.729196 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2253) Jan 13 21:27:49.813304 kubelet[1863]: E0113 21:27:49.813235 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:49.994433 containerd[1472]: time="2025-01-13T21:27:49.994379829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:49.995780 containerd[1472]: time="2025-01-13T21:27:49.995554262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 21:27:49.997302 containerd[1472]: time="2025-01-13T21:27:49.996972442Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:49.999750 containerd[1472]: time="2025-01-13T21:27:49.999718592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:50.000469 containerd[1472]: time="2025-01-13T21:27:50.000438162Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.484041087s" Jan 13 21:27:50.000517 containerd[1472]: time="2025-01-13T21:27:50.000467216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference 
\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 21:27:50.003051 containerd[1472]: time="2025-01-13T21:27:50.003022339Z" level=info msg="CreateContainer within sandbox \"9d48563ad00c93c0853826d28b7f7c62b04482c3c6c4fbbd8c6add655e8c1ee0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:27:50.017856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822908769.mount: Deactivated successfully. Jan 13 21:27:50.027851 containerd[1472]: time="2025-01-13T21:27:50.027795310Z" level=info msg="CreateContainer within sandbox \"9d48563ad00c93c0853826d28b7f7c62b04482c3c6c4fbbd8c6add655e8c1ee0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a0be6b39637e2f9fbc36c09b378d790f30cef208ecceeebeea553f8bb2da4832\"" Jan 13 21:27:50.028454 containerd[1472]: time="2025-01-13T21:27:50.028403952Z" level=info msg="StartContainer for \"a0be6b39637e2f9fbc36c09b378d790f30cef208ecceeebeea553f8bb2da4832\"" Jan 13 21:27:50.083289 systemd[1]: Started cri-containerd-a0be6b39637e2f9fbc36c09b378d790f30cef208ecceeebeea553f8bb2da4832.scope - libcontainer container a0be6b39637e2f9fbc36c09b378d790f30cef208ecceeebeea553f8bb2da4832. 
Jan 13 21:27:50.112389 containerd[1472]: time="2025-01-13T21:27:50.112260354Z" level=info msg="StartContainer for \"a0be6b39637e2f9fbc36c09b378d790f30cef208ecceeebeea553f8bb2da4832\" returns successfully" Jan 13 21:27:50.814433 kubelet[1863]: E0113 21:27:50.814292 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:50.928423 kubelet[1863]: E0113 21:27:50.928325 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7zd4" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72" Jan 13 21:27:51.252414 containerd[1472]: time="2025-01-13T21:27:51.252301792Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:27:51.257893 systemd[1]: cri-containerd-a0be6b39637e2f9fbc36c09b378d790f30cef208ecceeebeea553f8bb2da4832.scope: Deactivated successfully. Jan 13 21:27:51.305978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0be6b39637e2f9fbc36c09b378d790f30cef208ecceeebeea553f8bb2da4832-rootfs.mount: Deactivated successfully. 
Jan 13 21:27:51.309955 kubelet[1863]: I0113 21:27:51.308445 1863 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:27:51.815075 kubelet[1863]: E0113 21:27:51.815001 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:52.497801 containerd[1472]: time="2025-01-13T21:27:52.497660492Z" level=info msg="shim disconnected" id=a0be6b39637e2f9fbc36c09b378d790f30cef208ecceeebeea553f8bb2da4832 namespace=k8s.io Jan 13 21:27:52.497801 containerd[1472]: time="2025-01-13T21:27:52.497766621Z" level=warning msg="cleaning up after shim disconnected" id=a0be6b39637e2f9fbc36c09b378d790f30cef208ecceeebeea553f8bb2da4832 namespace=k8s.io Jan 13 21:27:52.497801 containerd[1472]: time="2025-01-13T21:27:52.497788041Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:27:52.815442 kubelet[1863]: E0113 21:27:52.815270 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:52.942516 systemd[1]: Created slice kubepods-besteffort-podbd59e032_03c6_4c4d_bd4a_80c72aff8c72.slice - libcontainer container kubepods-besteffort-podbd59e032_03c6_4c4d_bd4a_80c72aff8c72.slice. 
Jan 13 21:27:52.947638 containerd[1472]: time="2025-01-13T21:27:52.947555159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:0,}" Jan 13 21:27:53.001761 containerd[1472]: time="2025-01-13T21:27:53.001619236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 21:27:53.091343 containerd[1472]: time="2025-01-13T21:27:53.091241853Z" level=error msg="Failed to destroy network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:53.093003 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52-shm.mount: Deactivated successfully. Jan 13 21:27:53.093632 containerd[1472]: time="2025-01-13T21:27:53.093520647Z" level=error msg="encountered an error cleaning up failed sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:53.093632 containerd[1472]: time="2025-01-13T21:27:53.093587483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:53.093969 kubelet[1863]: E0113 
21:27:53.093929 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:53.095093 kubelet[1863]: E0113 21:27:53.094077 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:27:53.095093 kubelet[1863]: E0113 21:27:53.094149 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:27:53.095093 kubelet[1863]: E0113 21:27:53.094200 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7zd4" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72" Jan 13 21:27:53.816074 kubelet[1863]: E0113 21:27:53.815961 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:54.003483 kubelet[1863]: I0113 21:27:54.003401 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52" Jan 13 21:27:54.004977 containerd[1472]: time="2025-01-13T21:27:54.004869031Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\"" Jan 13 21:27:54.005766 containerd[1472]: time="2025-01-13T21:27:54.005313044Z" level=info msg="Ensure that sandbox fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52 in task-service has been cleanup successfully" Jan 13 21:27:54.005766 containerd[1472]: time="2025-01-13T21:27:54.005672328Z" level=info msg="TearDown network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" successfully" Jan 13 21:27:54.005766 containerd[1472]: time="2025-01-13T21:27:54.005729174Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" returns successfully" Jan 13 21:27:54.009332 containerd[1472]: time="2025-01-13T21:27:54.008745862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:1,}" Jan 13 21:27:54.009504 systemd[1]: run-netns-cni\x2d390fd4a0\x2dad86\x2df3a0\x2ddf2b\x2d80a3c76e3241.mount: Deactivated successfully. 
Jan 13 21:27:54.148605 containerd[1472]: time="2025-01-13T21:27:54.148419340Z" level=error msg="Failed to destroy network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:54.150940 containerd[1472]: time="2025-01-13T21:27:54.149528580Z" level=error msg="encountered an error cleaning up failed sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:54.150940 containerd[1472]: time="2025-01-13T21:27:54.149722664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:54.151081 kubelet[1863]: E0113 21:27:54.150168 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:54.151081 kubelet[1863]: E0113 21:27:54.150230 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:27:54.151081 kubelet[1863]: E0113 21:27:54.150253 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:27:54.151208 kubelet[1863]: E0113 21:27:54.150293 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7zd4" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72" Jan 13 21:27:54.154011 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a-shm.mount: Deactivated successfully. 
Jan 13 21:27:54.816456 kubelet[1863]: E0113 21:27:54.816343 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:55.008193 kubelet[1863]: I0113 21:27:55.007393 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a" Jan 13 21:27:55.009102 containerd[1472]: time="2025-01-13T21:27:55.008568469Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\"" Jan 13 21:27:55.009102 containerd[1472]: time="2025-01-13T21:27:55.008904048Z" level=info msg="Ensure that sandbox 48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a in task-service has been cleanup successfully" Jan 13 21:27:55.013804 containerd[1472]: time="2025-01-13T21:27:55.011273562Z" level=info msg="TearDown network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" successfully" Jan 13 21:27:55.013804 containerd[1472]: time="2025-01-13T21:27:55.011317214Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" returns successfully" Jan 13 21:27:55.013804 containerd[1472]: time="2025-01-13T21:27:55.012332098Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\"" Jan 13 21:27:55.013804 containerd[1472]: time="2025-01-13T21:27:55.012470668Z" level=info msg="TearDown network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" successfully" Jan 13 21:27:55.013804 containerd[1472]: time="2025-01-13T21:27:55.012495133Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" returns successfully" Jan 13 21:27:55.014591 systemd[1]: run-netns-cni\x2d5088fccf\x2d8d30\x2d51ce\x2d6682\x2d50612d1fa0d5.mount: Deactivated successfully. 
Jan 13 21:27:55.019452 containerd[1472]: time="2025-01-13T21:27:55.017429979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:2,}" Jan 13 21:27:55.137726 containerd[1472]: time="2025-01-13T21:27:55.137607171Z" level=error msg="Failed to destroy network for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:55.139397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3-shm.mount: Deactivated successfully. Jan 13 21:27:55.139904 containerd[1472]: time="2025-01-13T21:27:55.139875205Z" level=error msg="encountered an error cleaning up failed sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:55.140112 containerd[1472]: time="2025-01-13T21:27:55.140004227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:55.140856 kubelet[1863]: E0113 21:27:55.140793 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:55.141288 kubelet[1863]: E0113 21:27:55.140854 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:27:55.141288 kubelet[1863]: E0113 21:27:55.140891 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:27:55.141288 kubelet[1863]: E0113 21:27:55.140942 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7zd4" 
podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72" Jan 13 21:27:55.816985 kubelet[1863]: E0113 21:27:55.816860 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:56.014830 kubelet[1863]: I0113 21:27:56.012651 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3" Jan 13 21:27:56.015052 containerd[1472]: time="2025-01-13T21:27:56.014190613Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\"" Jan 13 21:27:56.015052 containerd[1472]: time="2025-01-13T21:27:56.014594250Z" level=info msg="Ensure that sandbox 97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3 in task-service has been cleanup successfully" Jan 13 21:27:56.018910 containerd[1472]: time="2025-01-13T21:27:56.018352529Z" level=info msg="TearDown network for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" successfully" Jan 13 21:27:56.018910 containerd[1472]: time="2025-01-13T21:27:56.018397804Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" returns successfully" Jan 13 21:27:56.020934 systemd[1]: run-netns-cni\x2d172f2f7f\x2d0b08\x2dd5b5\x2d21a4\x2d7155992ad3cc.mount: Deactivated successfully. 
Jan 13 21:27:56.025247 containerd[1472]: time="2025-01-13T21:27:56.023328221Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\"" Jan 13 21:27:56.025247 containerd[1472]: time="2025-01-13T21:27:56.023529568Z" level=info msg="TearDown network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" successfully" Jan 13 21:27:56.025247 containerd[1472]: time="2025-01-13T21:27:56.023559344Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" returns successfully" Jan 13 21:27:56.025904 containerd[1472]: time="2025-01-13T21:27:56.025858977Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\"" Jan 13 21:27:56.027329 containerd[1472]: time="2025-01-13T21:27:56.026458532Z" level=info msg="TearDown network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" successfully" Jan 13 21:27:56.027329 containerd[1472]: time="2025-01-13T21:27:56.026498006Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" returns successfully" Jan 13 21:27:56.027808 containerd[1472]: time="2025-01-13T21:27:56.027761896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:3,}" Jan 13 21:27:56.070676 kubelet[1863]: I0113 21:27:56.069852 1863 topology_manager.go:215] "Topology Admit Handler" podUID="0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec" podNamespace="default" podName="nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:27:56.095065 systemd[1]: Created slice kubepods-besteffort-pod0cb30184_6c95_43d3_a7ac_5e7dfdbc75ec.slice - libcontainer container kubepods-besteffort-pod0cb30184_6c95_43d3_a7ac_5e7dfdbc75ec.slice. 
Jan 13 21:27:56.157667 kubelet[1863]: I0113 21:27:56.157625 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v65d\" (UniqueName: \"kubernetes.io/projected/0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec-kube-api-access-9v65d\") pod \"nginx-deployment-85f456d6dd-vt9gg\" (UID: \"0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec\") " pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:27:56.169778 containerd[1472]: time="2025-01-13T21:27:56.169729747Z" level=error msg="Failed to destroy network for sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:56.170135 containerd[1472]: time="2025-01-13T21:27:56.170056189Z" level=error msg="encountered an error cleaning up failed sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:56.170194 containerd[1472]: time="2025-01-13T21:27:56.170150646Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:56.172160 kubelet[1863]: E0113 21:27:56.170371 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:56.172160 kubelet[1863]: E0113 21:27:56.170434 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:27:56.172160 kubelet[1863]: E0113 21:27:56.170459 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:27:56.172275 kubelet[1863]: E0113 21:27:56.170532 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7zd4" 
podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72" Jan 13 21:27:56.172434 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace-shm.mount: Deactivated successfully. Jan 13 21:27:56.401712 containerd[1472]: time="2025-01-13T21:27:56.401550366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:0,}" Jan 13 21:27:56.539968 containerd[1472]: time="2025-01-13T21:27:56.539923285Z" level=error msg="Failed to destroy network for sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:56.541021 containerd[1472]: time="2025-01-13T21:27:56.540405419Z" level=error msg="encountered an error cleaning up failed sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:56.541021 containerd[1472]: time="2025-01-13T21:27:56.540461925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:56.541204 kubelet[1863]: E0113 21:27:56.540647 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:56.541204 kubelet[1863]: E0113 21:27:56.540703 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:27:56.541204 kubelet[1863]: E0113 21:27:56.540724 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:27:56.541296 kubelet[1863]: E0113 21:27:56.540765 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-vt9gg_default(0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-vt9gg_default(0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-vt9gg" podUID="0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec" Jan 13 21:27:56.800817 kubelet[1863]: E0113 21:27:56.800778 1863 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:56.817313 kubelet[1863]: E0113 21:27:56.817285 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:57.020070 kubelet[1863]: I0113 21:27:57.019557 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae" Jan 13 21:27:57.021095 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae-shm.mount: Deactivated successfully. Jan 13 21:27:57.028175 containerd[1472]: time="2025-01-13T21:27:57.024805892Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\"" Jan 13 21:27:57.028175 containerd[1472]: time="2025-01-13T21:27:57.025256637Z" level=info msg="Ensure that sandbox a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae in task-service has been cleanup successfully" Jan 13 21:27:57.034496 containerd[1472]: time="2025-01-13T21:27:57.029055201Z" level=info msg="TearDown network for sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" successfully" Jan 13 21:27:57.034496 containerd[1472]: time="2025-01-13T21:27:57.029098633Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" returns successfully" Jan 13 21:27:57.032883 systemd[1]: run-netns-cni\x2d41f4cb42\x2d4554\x2da94e\x2dc352\x2dcd56687da2bc.mount: Deactivated successfully. 
Jan 13 21:27:57.038284 containerd[1472]: time="2025-01-13T21:27:57.036609930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:1,}" Jan 13 21:27:57.055996 kubelet[1863]: I0113 21:27:57.055817 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace" Jan 13 21:27:57.059851 containerd[1472]: time="2025-01-13T21:27:57.059797889Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\"" Jan 13 21:27:57.060043 containerd[1472]: time="2025-01-13T21:27:57.060010127Z" level=info msg="Ensure that sandbox 701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace in task-service has been cleanup successfully" Jan 13 21:27:57.063281 containerd[1472]: time="2025-01-13T21:27:57.063218855Z" level=info msg="TearDown network for sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" successfully" Jan 13 21:27:57.063281 containerd[1472]: time="2025-01-13T21:27:57.063241367Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" returns successfully" Jan 13 21:27:57.065736 systemd[1]: run-netns-cni\x2d74906b47\x2d280f\x2d4e7d\x2deca0\x2d799404b735dc.mount: Deactivated successfully. 
Jan 13 21:27:57.069599 containerd[1472]: time="2025-01-13T21:27:57.068974841Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\"" Jan 13 21:27:57.074167 containerd[1472]: time="2025-01-13T21:27:57.070190360Z" level=info msg="TearDown network for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" successfully" Jan 13 21:27:57.074167 containerd[1472]: time="2025-01-13T21:27:57.070231337Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" returns successfully" Jan 13 21:27:57.077559 containerd[1472]: time="2025-01-13T21:27:57.077509678Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\"" Jan 13 21:27:57.078711 containerd[1472]: time="2025-01-13T21:27:57.078630610Z" level=info msg="TearDown network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" successfully" Jan 13 21:27:57.081241 containerd[1472]: time="2025-01-13T21:27:57.081198987Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" returns successfully" Jan 13 21:27:57.083234 containerd[1472]: time="2025-01-13T21:27:57.083187647Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\"" Jan 13 21:27:57.083387 containerd[1472]: time="2025-01-13T21:27:57.083278136Z" level=info msg="TearDown network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" successfully" Jan 13 21:27:57.083387 containerd[1472]: time="2025-01-13T21:27:57.083292343Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" returns successfully" Jan 13 21:27:57.087795 containerd[1472]: time="2025-01-13T21:27:57.087677127Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:4,}" Jan 13 21:27:57.215294 containerd[1472]: time="2025-01-13T21:27:57.215212509Z" level=error msg="Failed to destroy network for sandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:57.215568 containerd[1472]: time="2025-01-13T21:27:57.215539963Z" level=error msg="encountered an error cleaning up failed sandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:57.215631 containerd[1472]: time="2025-01-13T21:27:57.215602691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:57.216185 kubelet[1863]: E0113 21:27:57.215801 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:57.216185 kubelet[1863]: E0113 
21:27:57.215873 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:27:57.216185 kubelet[1863]: E0113 21:27:57.215897 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:27:57.216312 kubelet[1863]: E0113 21:27:57.215939 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7zd4" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72" Jan 13 21:27:57.219131 containerd[1472]: time="2025-01-13T21:27:57.219073280Z" level=error msg="Failed to destroy network for sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:57.219757 containerd[1472]: time="2025-01-13T21:27:57.219383232Z" level=error msg="encountered an error cleaning up failed sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:57.219757 containerd[1472]: time="2025-01-13T21:27:57.219430931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:57.219946 kubelet[1863]: E0113 21:27:57.219593 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:57.219946 kubelet[1863]: E0113 21:27:57.219659 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:27:57.219946 kubelet[1863]: E0113 21:27:57.219681 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:27:57.220042 kubelet[1863]: E0113 21:27:57.219738 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-vt9gg_default(0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-vt9gg_default(0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-vt9gg" podUID="0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec" Jan 13 21:27:57.817572 kubelet[1863]: E0113 21:27:57.817458 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:58.019749 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8-shm.mount: Deactivated successfully. 
Jan 13 21:27:58.059739 kubelet[1863]: I0113 21:27:58.059659 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3" Jan 13 21:27:58.060390 containerd[1472]: time="2025-01-13T21:27:58.060364462Z" level=info msg="StopPodSandbox for \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\"" Jan 13 21:27:58.061413 containerd[1472]: time="2025-01-13T21:27:58.061344399Z" level=info msg="Ensure that sandbox b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3 in task-service has been cleanup successfully" Jan 13 21:27:58.062715 containerd[1472]: time="2025-01-13T21:27:58.062642193Z" level=info msg="TearDown network for sandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" successfully" Jan 13 21:27:58.062715 containerd[1472]: time="2025-01-13T21:27:58.062663864Z" level=info msg="StopPodSandbox for \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" returns successfully" Jan 13 21:27:58.063954 systemd[1]: run-netns-cni\x2d91272864\x2dd835\x2dce0e\x2d8a52\x2d65029dc89d77.mount: Deactivated successfully. 
Jan 13 21:27:58.065323 containerd[1472]: time="2025-01-13T21:27:58.065300930Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\"" Jan 13 21:27:58.065570 containerd[1472]: time="2025-01-13T21:27:58.065509792Z" level=info msg="TearDown network for sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" successfully" Jan 13 21:27:58.065657 containerd[1472]: time="2025-01-13T21:27:58.065643152Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" returns successfully" Jan 13 21:27:58.066638 containerd[1472]: time="2025-01-13T21:27:58.066616648Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\"" Jan 13 21:27:58.066940 containerd[1472]: time="2025-01-13T21:27:58.066849344Z" level=info msg="TearDown network for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" successfully" Jan 13 21:27:58.066940 containerd[1472]: time="2025-01-13T21:27:58.066865965Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" returns successfully" Jan 13 21:27:58.067065 kubelet[1863]: I0113 21:27:58.067039 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8" Jan 13 21:27:58.067402 containerd[1472]: time="2025-01-13T21:27:58.067329104Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\"" Jan 13 21:27:58.067762 containerd[1472]: time="2025-01-13T21:27:58.067567561Z" level=info msg="TearDown network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" successfully" Jan 13 21:27:58.067762 containerd[1472]: time="2025-01-13T21:27:58.067583090Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" returns 
successfully" Jan 13 21:27:58.068303 containerd[1472]: time="2025-01-13T21:27:58.068009490Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\"" Jan 13 21:27:58.068303 containerd[1472]: time="2025-01-13T21:27:58.068082557Z" level=info msg="TearDown network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" successfully" Jan 13 21:27:58.068303 containerd[1472]: time="2025-01-13T21:27:58.068093738Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" returns successfully" Jan 13 21:27:58.068303 containerd[1472]: time="2025-01-13T21:27:58.068176573Z" level=info msg="StopPodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\"" Jan 13 21:27:58.069727 containerd[1472]: time="2025-01-13T21:27:58.069442598Z" level=info msg="Ensure that sandbox 6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8 in task-service has been cleanup successfully" Jan 13 21:27:58.069727 containerd[1472]: time="2025-01-13T21:27:58.069628627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:5,}" Jan 13 21:27:58.069920 containerd[1472]: time="2025-01-13T21:27:58.069903392Z" level=info msg="TearDown network for sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" successfully" Jan 13 21:27:58.069991 containerd[1472]: time="2025-01-13T21:27:58.069974786Z" level=info msg="StopPodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" returns successfully" Jan 13 21:27:58.072167 systemd[1]: run-netns-cni\x2d9fc38c11\x2d769e\x2df7b6\x2d4c6b\x2d52e1ec0953b8.mount: Deactivated successfully. 
Jan 13 21:27:58.075146 containerd[1472]: time="2025-01-13T21:27:58.074511424Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\"" Jan 13 21:27:58.075146 containerd[1472]: time="2025-01-13T21:27:58.074597245Z" level=info msg="TearDown network for sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" successfully" Jan 13 21:27:58.075146 containerd[1472]: time="2025-01-13T21:27:58.074609819Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" returns successfully" Jan 13 21:27:58.077223 containerd[1472]: time="2025-01-13T21:27:58.076985925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:2,}" Jan 13 21:27:58.192981 containerd[1472]: time="2025-01-13T21:27:58.192923736Z" level=error msg="Failed to destroy network for sandbox \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:58.195033 containerd[1472]: time="2025-01-13T21:27:58.195003237Z" level=error msg="encountered an error cleaning up failed sandbox \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:58.195090 containerd[1472]: time="2025-01-13T21:27:58.195067537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox 
\"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:58.195441 kubelet[1863]: E0113 21:27:58.195386 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:58.195558 kubelet[1863]: E0113 21:27:58.195539 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:27:58.195674 kubelet[1863]: E0113 21:27:58.195657 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:27:58.195789 kubelet[1863]: E0113 21:27:58.195761 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7zd4" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72" Jan 13 21:27:58.211739 containerd[1472]: time="2025-01-13T21:27:58.211683931Z" level=error msg="Failed to destroy network for sandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:58.212012 containerd[1472]: time="2025-01-13T21:27:58.211984836Z" level=error msg="encountered an error cleaning up failed sandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:58.212070 containerd[1472]: time="2025-01-13T21:27:58.212044137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:58.212308 kubelet[1863]: E0113 21:27:58.212258 1863 remote_runtime.go:193] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:58.212370 kubelet[1863]: E0113 21:27:58.212334 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:27:58.212370 kubelet[1863]: E0113 21:27:58.212358 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:27:58.212440 kubelet[1863]: E0113 21:27:58.212404 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-vt9gg_default(0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-vt9gg_default(0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-vt9gg" podUID="0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec" Jan 13 21:27:58.818152 kubelet[1863]: E0113 21:27:58.818048 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:59.018872 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a-shm.mount: Deactivated successfully. Jan 13 21:27:59.073762 kubelet[1863]: I0113 21:27:59.073536 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260" Jan 13 21:27:59.076739 containerd[1472]: time="2025-01-13T21:27:59.074744431Z" level=info msg="StopPodSandbox for \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\"" Jan 13 21:27:59.076739 containerd[1472]: time="2025-01-13T21:27:59.075138931Z" level=info msg="Ensure that sandbox 25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260 in task-service has been cleanup successfully" Jan 13 21:27:59.076739 containerd[1472]: time="2025-01-13T21:27:59.075315893Z" level=info msg="TearDown network for sandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\" successfully" Jan 13 21:27:59.076739 containerd[1472]: time="2025-01-13T21:27:59.075331672Z" level=info msg="StopPodSandbox for \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\" returns successfully" Jan 13 21:27:59.077015 systemd[1]: run-netns-cni\x2d99be3e06\x2d73a1\x2d655d\x2d5bac\x2d5b12b26e2652.mount: Deactivated successfully. 
Jan 13 21:27:59.078532 containerd[1472]: time="2025-01-13T21:27:59.078485197Z" level=info msg="StopPodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\"" Jan 13 21:27:59.078591 containerd[1472]: time="2025-01-13T21:27:59.078559626Z" level=info msg="TearDown network for sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" successfully" Jan 13 21:27:59.078591 containerd[1472]: time="2025-01-13T21:27:59.078571889Z" level=info msg="StopPodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" returns successfully" Jan 13 21:27:59.079165 containerd[1472]: time="2025-01-13T21:27:59.079022374Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\"" Jan 13 21:27:59.079165 containerd[1472]: time="2025-01-13T21:27:59.079095381Z" level=info msg="TearDown network for sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" successfully" Jan 13 21:27:59.079165 containerd[1472]: time="2025-01-13T21:27:59.079108145Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" returns successfully" Jan 13 21:27:59.081071 containerd[1472]: time="2025-01-13T21:27:59.080067534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:3,}" Jan 13 21:27:59.084100 kubelet[1863]: I0113 21:27:59.084082 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a" Jan 13 21:27:59.084812 containerd[1472]: time="2025-01-13T21:27:59.084786605Z" level=info msg="StopPodSandbox for \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\"" Jan 13 21:27:59.085541 containerd[1472]: time="2025-01-13T21:27:59.085520131Z" level=info msg="Ensure that sandbox 
c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a in task-service has been cleanup successfully" Jan 13 21:27:59.085803 containerd[1472]: time="2025-01-13T21:27:59.085786260Z" level=info msg="TearDown network for sandbox \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\" successfully" Jan 13 21:27:59.085871 containerd[1472]: time="2025-01-13T21:27:59.085856381Z" level=info msg="StopPodSandbox for \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\" returns successfully" Jan 13 21:27:59.087714 systemd[1]: run-netns-cni\x2dd1431525\x2dc053\x2d7a83\x2d9882\x2d47859d86ce03.mount: Deactivated successfully. Jan 13 21:27:59.090935 containerd[1472]: time="2025-01-13T21:27:59.090183747Z" level=info msg="StopPodSandbox for \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\"" Jan 13 21:27:59.090935 containerd[1472]: time="2025-01-13T21:27:59.090332997Z" level=info msg="TearDown network for sandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" successfully" Jan 13 21:27:59.090935 containerd[1472]: time="2025-01-13T21:27:59.090347985Z" level=info msg="StopPodSandbox for \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" returns successfully" Jan 13 21:27:59.092699 containerd[1472]: time="2025-01-13T21:27:59.092648370Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\"" Jan 13 21:27:59.092769 containerd[1472]: time="2025-01-13T21:27:59.092747025Z" level=info msg="TearDown network for sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" successfully" Jan 13 21:27:59.092769 containerd[1472]: time="2025-01-13T21:27:59.092763826Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" returns successfully" Jan 13 21:27:59.093854 containerd[1472]: time="2025-01-13T21:27:59.093596438Z" level=info msg="StopPodSandbox for 
\"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\"" Jan 13 21:27:59.093854 containerd[1472]: time="2025-01-13T21:27:59.093704040Z" level=info msg="TearDown network for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" successfully" Jan 13 21:27:59.093854 containerd[1472]: time="2025-01-13T21:27:59.093737843Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" returns successfully" Jan 13 21:27:59.096614 containerd[1472]: time="2025-01-13T21:27:59.096580314Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\"" Jan 13 21:27:59.096770 containerd[1472]: time="2025-01-13T21:27:59.096752738Z" level=info msg="TearDown network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" successfully" Jan 13 21:27:59.096840 containerd[1472]: time="2025-01-13T21:27:59.096823089Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" returns successfully" Jan 13 21:27:59.097359 containerd[1472]: time="2025-01-13T21:27:59.097234691Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\"" Jan 13 21:27:59.097359 containerd[1472]: time="2025-01-13T21:27:59.097304482Z" level=info msg="TearDown network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" successfully" Jan 13 21:27:59.098355 containerd[1472]: time="2025-01-13T21:27:59.098260024Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" returns successfully" Jan 13 21:27:59.100548 containerd[1472]: time="2025-01-13T21:27:59.100239136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:6,}" Jan 13 21:27:59.220972 containerd[1472]: time="2025-01-13T21:27:59.220914293Z" 
level=error msg="Failed to destroy network for sandbox \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:59.221549 containerd[1472]: time="2025-01-13T21:27:59.221522844Z" level=error msg="encountered an error cleaning up failed sandbox \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:59.222284 containerd[1472]: time="2025-01-13T21:27:59.222184996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:59.222973 kubelet[1863]: E0113 21:27:59.222497 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:59.222973 kubelet[1863]: E0113 21:27:59.222557 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:27:59.222973 kubelet[1863]: E0113 21:27:59.222580 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:27:59.223174 kubelet[1863]: E0113 21:27:59.222627 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-vt9gg_default(0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-vt9gg_default(0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-vt9gg" podUID="0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec" Jan 13 21:27:59.234947 containerd[1472]: time="2025-01-13T21:27:59.234891306Z" level=error msg="Failed to destroy network for sandbox \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 
13 21:27:59.235664 containerd[1472]: time="2025-01-13T21:27:59.235482966Z" level=error msg="encountered an error cleaning up failed sandbox \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:59.235664 containerd[1472]: time="2025-01-13T21:27:59.235538039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:59.236169 kubelet[1863]: E0113 21:27:59.235930 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:27:59.236169 kubelet[1863]: E0113 21:27:59.236055 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:27:59.236169 kubelet[1863]: E0113 21:27:59.236081 1863 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:27:59.236559 kubelet[1863]: E0113 21:27:59.236269 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7zd4" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72" Jan 13 21:27:59.818901 kubelet[1863]: E0113 21:27:59.818846 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:00.018792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca-shm.mount: Deactivated successfully. 
Jan 13 21:28:00.091832 kubelet[1863]: I0113 21:28:00.091680 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f" Jan 13 21:28:00.092717 containerd[1472]: time="2025-01-13T21:28:00.092497436Z" level=info msg="StopPodSandbox for \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\"" Jan 13 21:28:00.094415 containerd[1472]: time="2025-01-13T21:28:00.092702361Z" level=info msg="Ensure that sandbox db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f in task-service has been cleanup successfully" Jan 13 21:28:00.094727 containerd[1472]: time="2025-01-13T21:28:00.094540328Z" level=info msg="TearDown network for sandbox \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\" successfully" Jan 13 21:28:00.094727 containerd[1472]: time="2025-01-13T21:28:00.094559614Z" level=info msg="StopPodSandbox for \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\" returns successfully" Jan 13 21:28:00.095703 containerd[1472]: time="2025-01-13T21:28:00.094921613Z" level=info msg="StopPodSandbox for \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\"" Jan 13 21:28:00.095703 containerd[1472]: time="2025-01-13T21:28:00.094997155Z" level=info msg="TearDown network for sandbox \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\" successfully" Jan 13 21:28:00.095703 containerd[1472]: time="2025-01-13T21:28:00.095009798Z" level=info msg="StopPodSandbox for \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\" returns successfully" Jan 13 21:28:00.095998 containerd[1472]: time="2025-01-13T21:28:00.095962866Z" level=info msg="StopPodSandbox for \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\"" Jan 13 21:28:00.096082 containerd[1472]: time="2025-01-13T21:28:00.096058636Z" level=info msg="TearDown network for sandbox 
\"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" successfully" Jan 13 21:28:00.096082 containerd[1472]: time="2025-01-13T21:28:00.096077250Z" level=info msg="StopPodSandbox for \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" returns successfully" Jan 13 21:28:00.096336 systemd[1]: run-netns-cni\x2db2216502\x2d203b\x2d37de\x2d19cb\x2d86e01a9cbde1.mount: Deactivated successfully. Jan 13 21:28:00.097114 containerd[1472]: time="2025-01-13T21:28:00.096400256Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\"" Jan 13 21:28:00.097114 containerd[1472]: time="2025-01-13T21:28:00.096463725Z" level=info msg="TearDown network for sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" successfully" Jan 13 21:28:00.097114 containerd[1472]: time="2025-01-13T21:28:00.096474445Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" returns successfully" Jan 13 21:28:00.097374 containerd[1472]: time="2025-01-13T21:28:00.097266280Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\"" Jan 13 21:28:00.097374 containerd[1472]: time="2025-01-13T21:28:00.097332254Z" level=info msg="TearDown network for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" successfully" Jan 13 21:28:00.097374 containerd[1472]: time="2025-01-13T21:28:00.097342954Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" returns successfully" Jan 13 21:28:00.098589 containerd[1472]: time="2025-01-13T21:28:00.098551942Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\"" Jan 13 21:28:00.098647 containerd[1472]: time="2025-01-13T21:28:00.098626181Z" level=info msg="TearDown network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" 
successfully" Jan 13 21:28:00.098647 containerd[1472]: time="2025-01-13T21:28:00.098643283Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" returns successfully" Jan 13 21:28:00.099501 containerd[1472]: time="2025-01-13T21:28:00.099467819Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\"" Jan 13 21:28:00.099549 containerd[1472]: time="2025-01-13T21:28:00.099538672Z" level=info msg="TearDown network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" successfully" Jan 13 21:28:00.099579 containerd[1472]: time="2025-01-13T21:28:00.099549863Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" returns successfully" Jan 13 21:28:00.099804 kubelet[1863]: I0113 21:28:00.099775 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca" Jan 13 21:28:00.100266 containerd[1472]: time="2025-01-13T21:28:00.100234938Z" level=info msg="StopPodSandbox for \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\"" Jan 13 21:28:00.100428 containerd[1472]: time="2025-01-13T21:28:00.100400699Z" level=info msg="Ensure that sandbox f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca in task-service has been cleanup successfully" Jan 13 21:28:00.101902 systemd[1]: run-netns-cni\x2d5c15614f\x2d01d1\x2d4cd1\x2df8c6\x2d52e69cf763f8.mount: Deactivated successfully. 
Jan 13 21:28:00.102809 containerd[1472]: time="2025-01-13T21:28:00.102777757Z" level=info msg="TearDown network for sandbox \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\" successfully" Jan 13 21:28:00.102809 containerd[1472]: time="2025-01-13T21:28:00.102801973Z" level=info msg="StopPodSandbox for \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\" returns successfully" Jan 13 21:28:00.102926 containerd[1472]: time="2025-01-13T21:28:00.102900167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:7,}" Jan 13 21:28:00.108505 containerd[1472]: time="2025-01-13T21:28:00.108481234Z" level=info msg="StopPodSandbox for \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\"" Jan 13 21:28:00.108598 containerd[1472]: time="2025-01-13T21:28:00.108554381Z" level=info msg="TearDown network for sandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\" successfully" Jan 13 21:28:00.108598 containerd[1472]: time="2025-01-13T21:28:00.108569830Z" level=info msg="StopPodSandbox for \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\" returns successfully" Jan 13 21:28:00.108942 containerd[1472]: time="2025-01-13T21:28:00.108828355Z" level=info msg="StopPodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\"" Jan 13 21:28:00.108942 containerd[1472]: time="2025-01-13T21:28:00.108894750Z" level=info msg="TearDown network for sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" successfully" Jan 13 21:28:00.108942 containerd[1472]: time="2025-01-13T21:28:00.108905740Z" level=info msg="StopPodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" returns successfully" Jan 13 21:28:00.115336 containerd[1472]: time="2025-01-13T21:28:00.115307016Z" level=info msg="StopPodSandbox for 
\"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\"" Jan 13 21:28:00.115477 containerd[1472]: time="2025-01-13T21:28:00.115391905Z" level=info msg="TearDown network for sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" successfully" Jan 13 21:28:00.115477 containerd[1472]: time="2025-01-13T21:28:00.115408887Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" returns successfully" Jan 13 21:28:00.115853 containerd[1472]: time="2025-01-13T21:28:00.115823655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:4,}" Jan 13 21:28:00.522542 containerd[1472]: time="2025-01-13T21:28:00.522418042Z" level=error msg="Failed to destroy network for sandbox \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:00.523340 containerd[1472]: time="2025-01-13T21:28:00.523178188Z" level=error msg="encountered an error cleaning up failed sandbox \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:00.523340 containerd[1472]: time="2025-01-13T21:28:00.523251335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:00.523556 kubelet[1863]: E0113 21:28:00.523470 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:00.523556 kubelet[1863]: E0113 21:28:00.523543 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:28:00.523641 kubelet[1863]: E0113 21:28:00.523566 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:28:00.523641 kubelet[1863]: E0113 21:28:00.523622 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-vt9gg_default(0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-vt9gg_default(0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-vt9gg" podUID="0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec" Jan 13 21:28:00.527883 containerd[1472]: time="2025-01-13T21:28:00.527830013Z" level=error msg="Failed to destroy network for sandbox \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:00.529153 containerd[1472]: time="2025-01-13T21:28:00.529094123Z" level=error msg="encountered an error cleaning up failed sandbox \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:00.529207 containerd[1472]: time="2025-01-13T21:28:00.529172170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:00.529342 kubelet[1863]: E0113 21:28:00.529307 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:00.529377 kubelet[1863]: E0113 21:28:00.529351 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:28:00.529403 kubelet[1863]: E0113 21:28:00.529371 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:28:00.529431 kubelet[1863]: E0113 21:28:00.529404 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7zd4" 
podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72" Jan 13 21:28:00.819404 kubelet[1863]: E0113 21:28:00.819272 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:01.018951 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec-shm.mount: Deactivated successfully. Jan 13 21:28:01.019277 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e-shm.mount: Deactivated successfully. Jan 13 21:28:01.105155 kubelet[1863]: I0113 21:28:01.104963 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec" Jan 13 21:28:01.106914 containerd[1472]: time="2025-01-13T21:28:01.106773301Z" level=info msg="StopPodSandbox for \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\"" Jan 13 21:28:01.107517 containerd[1472]: time="2025-01-13T21:28:01.106968647Z" level=info msg="Ensure that sandbox 471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec in task-service has been cleanup successfully" Jan 13 21:28:01.107517 containerd[1472]: time="2025-01-13T21:28:01.107265794Z" level=info msg="TearDown network for sandbox \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\" successfully" Jan 13 21:28:01.107517 containerd[1472]: time="2025-01-13T21:28:01.107280933Z" level=info msg="StopPodSandbox for \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\" returns successfully" Jan 13 21:28:01.109234 containerd[1472]: time="2025-01-13T21:28:01.109204480Z" level=info msg="StopPodSandbox for \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\"" Jan 13 21:28:01.109295 containerd[1472]: time="2025-01-13T21:28:01.109274582Z" level=info msg="TearDown network for sandbox 
\"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\" successfully" Jan 13 21:28:01.109295 containerd[1472]: time="2025-01-13T21:28:01.109290542Z" level=info msg="StopPodSandbox for \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\" returns successfully" Jan 13 21:28:01.109839 containerd[1472]: time="2025-01-13T21:28:01.109596375Z" level=info msg="StopPodSandbox for \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\"" Jan 13 21:28:01.109839 containerd[1472]: time="2025-01-13T21:28:01.109654564Z" level=info msg="TearDown network for sandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\" successfully" Jan 13 21:28:01.109839 containerd[1472]: time="2025-01-13T21:28:01.109666657Z" level=info msg="StopPodSandbox for \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\" returns successfully" Jan 13 21:28:01.110198 systemd[1]: run-netns-cni\x2dce0e3198\x2d13e4\x2d9709\x2dd959\x2dc580713c8840.mount: Deactivated successfully. 
Jan 13 21:28:01.112479 containerd[1472]: time="2025-01-13T21:28:01.110387639Z" level=info msg="StopPodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\"" Jan 13 21:28:01.112479 containerd[1472]: time="2025-01-13T21:28:01.110443574Z" level=info msg="TearDown network for sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" successfully" Jan 13 21:28:01.112479 containerd[1472]: time="2025-01-13T21:28:01.110455947Z" level=info msg="StopPodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" returns successfully" Jan 13 21:28:01.112479 containerd[1472]: time="2025-01-13T21:28:01.110912313Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\"" Jan 13 21:28:01.112479 containerd[1472]: time="2025-01-13T21:28:01.110991332Z" level=info msg="TearDown network for sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" successfully" Jan 13 21:28:01.112479 containerd[1472]: time="2025-01-13T21:28:01.111004296Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" returns successfully" Jan 13 21:28:01.112479 containerd[1472]: time="2025-01-13T21:28:01.111780171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:5,}" Jan 13 21:28:01.116935 kubelet[1863]: I0113 21:28:01.116389 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e" Jan 13 21:28:01.117236 containerd[1472]: time="2025-01-13T21:28:01.117154711Z" level=info msg="StopPodSandbox for \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\"" Jan 13 21:28:01.117764 containerd[1472]: time="2025-01-13T21:28:01.117743776Z" level=info msg="Ensure that sandbox 
c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e in task-service has been cleanup successfully" Jan 13 21:28:01.118112 containerd[1472]: time="2025-01-13T21:28:01.118083323Z" level=info msg="TearDown network for sandbox \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\" successfully" Jan 13 21:28:01.118274 containerd[1472]: time="2025-01-13T21:28:01.118217013Z" level=info msg="StopPodSandbox for \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\" returns successfully" Jan 13 21:28:01.119914 systemd[1]: run-netns-cni\x2d6c9a25ed\x2d0d04\x2d02ed\x2d6396\x2de5cfa9cd9b25.mount: Deactivated successfully. Jan 13 21:28:01.122805 containerd[1472]: time="2025-01-13T21:28:01.122551953Z" level=info msg="StopPodSandbox for \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\"" Jan 13 21:28:01.122805 containerd[1472]: time="2025-01-13T21:28:01.122655027Z" level=info msg="TearDown network for sandbox \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\" successfully" Jan 13 21:28:01.122805 containerd[1472]: time="2025-01-13T21:28:01.122678351Z" level=info msg="StopPodSandbox for \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\" returns successfully" Jan 13 21:28:01.127196 containerd[1472]: time="2025-01-13T21:28:01.126689654Z" level=info msg="StopPodSandbox for \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\"" Jan 13 21:28:01.127196 containerd[1472]: time="2025-01-13T21:28:01.126920647Z" level=info msg="TearDown network for sandbox \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\" successfully" Jan 13 21:28:01.127196 containerd[1472]: time="2025-01-13T21:28:01.126935946Z" level=info msg="StopPodSandbox for \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\" returns successfully" Jan 13 21:28:01.128515 containerd[1472]: time="2025-01-13T21:28:01.128475433Z" level=info msg="StopPodSandbox for 
\"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\"" Jan 13 21:28:01.129488 containerd[1472]: time="2025-01-13T21:28:01.128590288Z" level=info msg="TearDown network for sandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" successfully" Jan 13 21:28:01.129488 containerd[1472]: time="2025-01-13T21:28:01.128613822Z" level=info msg="StopPodSandbox for \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" returns successfully" Jan 13 21:28:01.131320 containerd[1472]: time="2025-01-13T21:28:01.131282788Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\"" Jan 13 21:28:01.131720 containerd[1472]: time="2025-01-13T21:28:01.131533869Z" level=info msg="TearDown network for sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" successfully" Jan 13 21:28:01.131762 containerd[1472]: time="2025-01-13T21:28:01.131716411Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" returns successfully" Jan 13 21:28:01.132049 containerd[1472]: time="2025-01-13T21:28:01.132025180Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\"" Jan 13 21:28:01.132203 containerd[1472]: time="2025-01-13T21:28:01.132113436Z" level=info msg="TearDown network for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" successfully" Jan 13 21:28:01.132203 containerd[1472]: time="2025-01-13T21:28:01.132161386Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" returns successfully" Jan 13 21:28:01.134907 containerd[1472]: time="2025-01-13T21:28:01.134076778Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\"" Jan 13 21:28:01.134907 containerd[1472]: time="2025-01-13T21:28:01.134173239Z" level=info msg="TearDown network for sandbox 
\"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" successfully" Jan 13 21:28:01.134907 containerd[1472]: time="2025-01-13T21:28:01.134215539Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" returns successfully" Jan 13 21:28:01.134907 containerd[1472]: time="2025-01-13T21:28:01.134502316Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\"" Jan 13 21:28:01.134907 containerd[1472]: time="2025-01-13T21:28:01.134570144Z" level=info msg="TearDown network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" successfully" Jan 13 21:28:01.134907 containerd[1472]: time="2025-01-13T21:28:01.134582056Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" returns successfully" Jan 13 21:28:01.135666 containerd[1472]: time="2025-01-13T21:28:01.135638337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:8,}" Jan 13 21:28:01.243701 containerd[1472]: time="2025-01-13T21:28:01.243648670Z" level=error msg="Failed to destroy network for sandbox \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:01.245464 containerd[1472]: time="2025-01-13T21:28:01.245425413Z" level=error msg="encountered an error cleaning up failed sandbox \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:01.245595 containerd[1472]: 
time="2025-01-13T21:28:01.245572128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:01.246225 kubelet[1863]: E0113 21:28:01.246134 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:01.246225 kubelet[1863]: E0113 21:28:01.246195 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:28:01.246437 kubelet[1863]: E0113 21:28:01.246219 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:28:01.246437 kubelet[1863]: E0113 
21:28:01.246273 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-vt9gg_default(0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-vt9gg_default(0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-vt9gg" podUID="0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec" Jan 13 21:28:01.260511 containerd[1472]: time="2025-01-13T21:28:01.260467113Z" level=error msg="Failed to destroy network for sandbox \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:01.260790 containerd[1472]: time="2025-01-13T21:28:01.260759091Z" level=error msg="encountered an error cleaning up failed sandbox \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:01.260939 containerd[1472]: time="2025-01-13T21:28:01.260819865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:01.261045 kubelet[1863]: E0113 21:28:01.261009 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:01.261151 kubelet[1863]: E0113 21:28:01.261067 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:28:01.261151 kubelet[1863]: E0113 21:28:01.261096 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:28:01.261384 kubelet[1863]: E0113 21:28:01.261164 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7zd4" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72" Jan 13 21:28:01.820044 kubelet[1863]: E0113 21:28:01.820004 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:02.018881 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e-shm.mount: Deactivated successfully. Jan 13 21:28:02.117868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount663893776.mount: Deactivated successfully. Jan 13 21:28:02.123174 kubelet[1863]: I0113 21:28:02.123105 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98" Jan 13 21:28:02.124101 containerd[1472]: time="2025-01-13T21:28:02.124063829Z" level=info msg="StopPodSandbox for \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\"" Jan 13 21:28:02.127155 containerd[1472]: time="2025-01-13T21:28:02.124283541Z" level=info msg="Ensure that sandbox 6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98 in task-service has been cleanup successfully" Jan 13 21:28:02.127155 containerd[1472]: time="2025-01-13T21:28:02.124429194Z" level=info msg="TearDown network for sandbox \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\" successfully" Jan 13 21:28:02.127155 containerd[1472]: time="2025-01-13T21:28:02.124444693Z" level=info msg="StopPodSandbox for \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\" returns successfully" Jan 13 21:28:02.127155 containerd[1472]: time="2025-01-13T21:28:02.124791904Z" 
level=info msg="StopPodSandbox for \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\"" Jan 13 21:28:02.127155 containerd[1472]: time="2025-01-13T21:28:02.124853339Z" level=info msg="TearDown network for sandbox \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\" successfully" Jan 13 21:28:02.127155 containerd[1472]: time="2025-01-13T21:28:02.124864330Z" level=info msg="StopPodSandbox for \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\" returns successfully" Jan 13 21:28:02.127155 containerd[1472]: time="2025-01-13T21:28:02.126888466Z" level=info msg="StopPodSandbox for \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\"" Jan 13 21:28:02.127155 containerd[1472]: time="2025-01-13T21:28:02.126957786Z" level=info msg="TearDown network for sandbox \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\" successfully" Jan 13 21:28:02.127155 containerd[1472]: time="2025-01-13T21:28:02.126968737Z" level=info msg="StopPodSandbox for \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\" returns successfully" Jan 13 21:28:02.126267 systemd[1]: run-netns-cni\x2dcfee7fec\x2dc5d5\x2da42f\x2ddd57\x2d09b5f40c33c0.mount: Deactivated successfully. 
Jan 13 21:28:02.127435 containerd[1472]: time="2025-01-13T21:28:02.127331066Z" level=info msg="StopPodSandbox for \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\"" Jan 13 21:28:02.127463 containerd[1472]: time="2025-01-13T21:28:02.127398844Z" level=info msg="TearDown network for sandbox \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\" successfully" Jan 13 21:28:02.127463 containerd[1472]: time="2025-01-13T21:28:02.127442215Z" level=info msg="StopPodSandbox for \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\" returns successfully" Jan 13 21:28:02.129094 containerd[1472]: time="2025-01-13T21:28:02.129066361Z" level=info msg="StopPodSandbox for \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\"" Jan 13 21:28:02.129472 containerd[1472]: time="2025-01-13T21:28:02.129445662Z" level=info msg="TearDown network for sandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" successfully" Jan 13 21:28:02.129514 containerd[1472]: time="2025-01-13T21:28:02.129482431Z" level=info msg="StopPodSandbox for \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" returns successfully" Jan 13 21:28:02.129889 containerd[1472]: time="2025-01-13T21:28:02.129861532Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\"" Jan 13 21:28:02.129970 containerd[1472]: time="2025-01-13T21:28:02.129946031Z" level=info msg="TearDown network for sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" successfully" Jan 13 21:28:02.129970 containerd[1472]: time="2025-01-13T21:28:02.129962642Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" returns successfully" Jan 13 21:28:02.130554 containerd[1472]: time="2025-01-13T21:28:02.130527571Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\"" Jan 13 21:28:02.130629 
containerd[1472]: time="2025-01-13T21:28:02.130607121Z" level=info msg="TearDown network for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" successfully" Jan 13 21:28:02.130629 containerd[1472]: time="2025-01-13T21:28:02.130622580Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" returns successfully" Jan 13 21:28:02.130787 kubelet[1863]: I0113 21:28:02.130762 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e" Jan 13 21:28:02.131185 containerd[1472]: time="2025-01-13T21:28:02.131159437Z" level=info msg="StopPodSandbox for \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\"" Jan 13 21:28:02.131430 containerd[1472]: time="2025-01-13T21:28:02.131380812Z" level=info msg="Ensure that sandbox 5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e in task-service has been cleanup successfully" Jan 13 21:28:02.132681 containerd[1472]: time="2025-01-13T21:28:02.131913531Z" level=info msg="TearDown network for sandbox \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\" successfully" Jan 13 21:28:02.133093 systemd[1]: run-netns-cni\x2d997aeb2b\x2d8942\x2da445\x2d17f5\x2da709eaf76e82.mount: Deactivated successfully. 
Jan 13 21:28:02.133191 containerd[1472]: time="2025-01-13T21:28:02.133157604Z" level=info msg="StopPodSandbox for \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\" returns successfully" Jan 13 21:28:02.133277 containerd[1472]: time="2025-01-13T21:28:02.133253233Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\"" Jan 13 21:28:02.133337 containerd[1472]: time="2025-01-13T21:28:02.133317183Z" level=info msg="TearDown network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" successfully" Jan 13 21:28:02.133337 containerd[1472]: time="2025-01-13T21:28:02.133332272Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" returns successfully" Jan 13 21:28:02.133915 containerd[1472]: time="2025-01-13T21:28:02.133888114Z" level=info msg="StopPodSandbox for \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\"" Jan 13 21:28:02.134159 containerd[1472]: time="2025-01-13T21:28:02.134095142Z" level=info msg="TearDown network for sandbox \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\" successfully" Jan 13 21:28:02.135359 containerd[1472]: time="2025-01-13T21:28:02.135331201Z" level=info msg="StopPodSandbox for \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\" returns successfully" Jan 13 21:28:02.135528 containerd[1472]: time="2025-01-13T21:28:02.135501460Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\"" Jan 13 21:28:02.135613 containerd[1472]: time="2025-01-13T21:28:02.135590216Z" level=info msg="TearDown network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" successfully" Jan 13 21:28:02.135613 containerd[1472]: time="2025-01-13T21:28:02.135607779Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" returns successfully" Jan 13 
21:28:02.136024 containerd[1472]: time="2025-01-13T21:28:02.135996539Z" level=info msg="StopPodSandbox for \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\"" Jan 13 21:28:02.136232 containerd[1472]: time="2025-01-13T21:28:02.136207344Z" level=info msg="TearDown network for sandbox \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\" successfully" Jan 13 21:28:02.136279 containerd[1472]: time="2025-01-13T21:28:02.136226931Z" level=info msg="StopPodSandbox for \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\" returns successfully" Jan 13 21:28:02.136633 containerd[1472]: time="2025-01-13T21:28:02.136606232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:9,}" Jan 13 21:28:02.137002 containerd[1472]: time="2025-01-13T21:28:02.136964774Z" level=info msg="StopPodSandbox for \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\"" Jan 13 21:28:02.137068 containerd[1472]: time="2025-01-13T21:28:02.137045155Z" level=info msg="TearDown network for sandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\" successfully" Jan 13 21:28:02.137068 containerd[1472]: time="2025-01-13T21:28:02.137061095Z" level=info msg="StopPodSandbox for \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\" returns successfully" Jan 13 21:28:02.137339 containerd[1472]: time="2025-01-13T21:28:02.137311835Z" level=info msg="StopPodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\"" Jan 13 21:28:02.137419 containerd[1472]: time="2025-01-13T21:28:02.137397426Z" level=info msg="TearDown network for sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" successfully" Jan 13 21:28:02.137419 containerd[1472]: time="2025-01-13T21:28:02.137413356Z" level=info msg="StopPodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" 
returns successfully" Jan 13 21:28:02.137790 containerd[1472]: time="2025-01-13T21:28:02.137758353Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\"" Jan 13 21:28:02.137869 containerd[1472]: time="2025-01-13T21:28:02.137846919Z" level=info msg="TearDown network for sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" successfully" Jan 13 21:28:02.137869 containerd[1472]: time="2025-01-13T21:28:02.137863250Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" returns successfully" Jan 13 21:28:02.138250 containerd[1472]: time="2025-01-13T21:28:02.138223044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:6,}" Jan 13 21:28:02.178449 containerd[1472]: time="2025-01-13T21:28:02.178385699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:02.186639 containerd[1472]: time="2025-01-13T21:28:02.186510307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 21:28:02.207602 containerd[1472]: time="2025-01-13T21:28:02.207439460Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:02.225654 containerd[1472]: time="2025-01-13T21:28:02.225324613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:02.230534 containerd[1472]: time="2025-01-13T21:28:02.230280398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id 
\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.228583686s" Jan 13 21:28:02.230534 containerd[1472]: time="2025-01-13T21:28:02.230335621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 21:28:02.251443 containerd[1472]: time="2025-01-13T21:28:02.251197378Z" level=info msg="CreateContainer within sandbox \"9d48563ad00c93c0853826d28b7f7c62b04482c3c6c4fbbd8c6add655e8c1ee0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:28:02.281800 containerd[1472]: time="2025-01-13T21:28:02.281659830Z" level=info msg="CreateContainer within sandbox \"9d48563ad00c93c0853826d28b7f7c62b04482c3c6c4fbbd8c6add655e8c1ee0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c4c4bcd5dd837fe43b372f1460a0f824e06569990aec830964a45d282091f7f1\"" Jan 13 21:28:02.282534 containerd[1472]: time="2025-01-13T21:28:02.282366475Z" level=info msg="StartContainer for \"c4c4bcd5dd837fe43b372f1460a0f824e06569990aec830964a45d282091f7f1\"" Jan 13 21:28:02.305549 containerd[1472]: time="2025-01-13T21:28:02.305480916Z" level=error msg="Failed to destroy network for sandbox \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:02.305943 containerd[1472]: time="2025-01-13T21:28:02.305907827Z" level=error msg="encountered an error cleaning up failed sandbox \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:02.306517 containerd[1472]: time="2025-01-13T21:28:02.306255209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:02.306573 kubelet[1863]: E0113 21:28:02.306486 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:02.306573 kubelet[1863]: E0113 21:28:02.306560 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:28:02.306639 kubelet[1863]: E0113 21:28:02.306585 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7zd4" Jan 13 21:28:02.306667 kubelet[1863]: E0113 21:28:02.306636 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7zd4_calico-system(bd59e032-03c6-4c4d-bd4a-80c72aff8c72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7zd4" podUID="bd59e032-03c6-4c4d-bd4a-80c72aff8c72" Jan 13 21:28:02.308842 containerd[1472]: time="2025-01-13T21:28:02.308789161Z" level=error msg="Failed to destroy network for sandbox \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:02.309216 containerd[1472]: time="2025-01-13T21:28:02.309184302Z" level=error msg="encountered an error cleaning up failed sandbox \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:02.309288 containerd[1472]: time="2025-01-13T21:28:02.309237221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:6,} failed, 
error" error="failed to setup network for sandbox \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:02.309462 kubelet[1863]: E0113 21:28:02.309409 1863 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:28:02.309507 kubelet[1863]: E0113 21:28:02.309466 1863 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:28:02.309507 kubelet[1863]: E0113 21:28:02.309484 1863 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-vt9gg" Jan 13 21:28:02.309560 kubelet[1863]: E0113 21:28:02.309518 1863 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-vt9gg_default(0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-vt9gg_default(0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-vt9gg" podUID="0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec" Jan 13 21:28:02.330277 systemd[1]: Started cri-containerd-c4c4bcd5dd837fe43b372f1460a0f824e06569990aec830964a45d282091f7f1.scope - libcontainer container c4c4bcd5dd837fe43b372f1460a0f824e06569990aec830964a45d282091f7f1. Jan 13 21:28:02.367151 containerd[1472]: time="2025-01-13T21:28:02.367077561Z" level=info msg="StartContainer for \"c4c4bcd5dd837fe43b372f1460a0f824e06569990aec830964a45d282091f7f1\" returns successfully" Jan 13 21:28:02.434593 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:28:02.434694 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 13 21:28:02.821057 kubelet[1863]: E0113 21:28:02.820773 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:03.030952 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a-shm.mount: Deactivated successfully. 
Jan 13 21:28:03.143541 kubelet[1863]: I0113 21:28:03.142609 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3" Jan 13 21:28:03.144578 containerd[1472]: time="2025-01-13T21:28:03.144096059Z" level=info msg="StopPodSandbox for \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\"" Jan 13 21:28:03.148528 containerd[1472]: time="2025-01-13T21:28:03.144581209Z" level=info msg="Ensure that sandbox 8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3 in task-service has been cleanup successfully" Jan 13 21:28:03.148528 containerd[1472]: time="2025-01-13T21:28:03.148196650Z" level=info msg="TearDown network for sandbox \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\" successfully" Jan 13 21:28:03.148528 containerd[1472]: time="2025-01-13T21:28:03.148290335Z" level=info msg="StopPodSandbox for \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\" returns successfully" Jan 13 21:28:03.151240 containerd[1472]: time="2025-01-13T21:28:03.149336768Z" level=info msg="StopPodSandbox for \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\"" Jan 13 21:28:03.151240 containerd[1472]: time="2025-01-13T21:28:03.149528137Z" level=info msg="TearDown network for sandbox \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\" successfully" Jan 13 21:28:03.151240 containerd[1472]: time="2025-01-13T21:28:03.149559165Z" level=info msg="StopPodSandbox for \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\" returns successfully" Jan 13 21:28:03.153087 systemd[1]: run-netns-cni\x2d1fa3bed0\x2d7ecc\x2d0248\x2ddd66\x2d16d46801bdbc.mount: Deactivated successfully. 
Jan 13 21:28:03.155994 containerd[1472]: time="2025-01-13T21:28:03.155599924Z" level=info msg="StopPodSandbox for \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\"" Jan 13 21:28:03.155994 containerd[1472]: time="2025-01-13T21:28:03.155789801Z" level=info msg="TearDown network for sandbox \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\" successfully" Jan 13 21:28:03.155994 containerd[1472]: time="2025-01-13T21:28:03.155824686Z" level=info msg="StopPodSandbox for \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\" returns successfully" Jan 13 21:28:03.161177 containerd[1472]: time="2025-01-13T21:28:03.159023926Z" level=info msg="StopPodSandbox for \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\"" Jan 13 21:28:03.161177 containerd[1472]: time="2025-01-13T21:28:03.159225925Z" level=info msg="TearDown network for sandbox \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\" successfully" Jan 13 21:28:03.161177 containerd[1472]: time="2025-01-13T21:28:03.159255721Z" level=info msg="StopPodSandbox for \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\" returns successfully" Jan 13 21:28:03.161904 containerd[1472]: time="2025-01-13T21:28:03.161845488Z" level=info msg="StopPodSandbox for \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\"" Jan 13 21:28:03.162106 containerd[1472]: time="2025-01-13T21:28:03.162014004Z" level=info msg="TearDown network for sandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\" successfully" Jan 13 21:28:03.162106 containerd[1472]: time="2025-01-13T21:28:03.162083234Z" level=info msg="StopPodSandbox for \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\" returns successfully" Jan 13 21:28:03.164233 containerd[1472]: time="2025-01-13T21:28:03.164177432Z" level=info msg="StopPodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\"" Jan 13 21:28:03.165413 
containerd[1472]: time="2025-01-13T21:28:03.165334391Z" level=info msg="TearDown network for sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" successfully" Jan 13 21:28:03.165717 containerd[1472]: time="2025-01-13T21:28:03.165646727Z" level=info msg="StopPodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" returns successfully" Jan 13 21:28:03.168050 containerd[1472]: time="2025-01-13T21:28:03.167939247Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\"" Jan 13 21:28:03.168637 containerd[1472]: time="2025-01-13T21:28:03.168583846Z" level=info msg="TearDown network for sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" successfully" Jan 13 21:28:03.168637 containerd[1472]: time="2025-01-13T21:28:03.168633008Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" returns successfully" Jan 13 21:28:03.172189 containerd[1472]: time="2025-01-13T21:28:03.170923754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:7,}" Jan 13 21:28:03.178145 kubelet[1863]: I0113 21:28:03.178008 1863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a" Jan 13 21:28:03.179530 containerd[1472]: time="2025-01-13T21:28:03.179476906Z" level=info msg="StopPodSandbox for \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\"" Jan 13 21:28:03.180385 containerd[1472]: time="2025-01-13T21:28:03.180337379Z" level=info msg="Ensure that sandbox 85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a in task-service has been cleanup successfully" Jan 13 21:28:03.182488 containerd[1472]: time="2025-01-13T21:28:03.182377416Z" level=info msg="TearDown network for sandbox 
\"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\" successfully" Jan 13 21:28:03.182634 containerd[1472]: time="2025-01-13T21:28:03.182560960Z" level=info msg="StopPodSandbox for \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\" returns successfully" Jan 13 21:28:03.187848 systemd[1]: run-netns-cni\x2d9c49e917\x2d1f26\x2d50d9\x2d4db6\x2d15242d70dbac.mount: Deactivated successfully. Jan 13 21:28:03.191172 containerd[1472]: time="2025-01-13T21:28:03.190371649Z" level=info msg="StopPodSandbox for \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\"" Jan 13 21:28:03.191172 containerd[1472]: time="2025-01-13T21:28:03.190569891Z" level=info msg="TearDown network for sandbox \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\" successfully" Jan 13 21:28:03.191172 containerd[1472]: time="2025-01-13T21:28:03.190599747Z" level=info msg="StopPodSandbox for \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\" returns successfully" Jan 13 21:28:03.194491 containerd[1472]: time="2025-01-13T21:28:03.193939470Z" level=info msg="StopPodSandbox for \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\"" Jan 13 21:28:03.194916 containerd[1472]: time="2025-01-13T21:28:03.194793532Z" level=info msg="TearDown network for sandbox \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\" successfully" Jan 13 21:28:03.194916 containerd[1472]: time="2025-01-13T21:28:03.194876908Z" level=info msg="StopPodSandbox for \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\" returns successfully" Jan 13 21:28:03.196596 containerd[1472]: time="2025-01-13T21:28:03.196526202Z" level=info msg="StopPodSandbox for \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\"" Jan 13 21:28:03.196725 containerd[1472]: time="2025-01-13T21:28:03.196685260Z" level=info msg="TearDown network for sandbox \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\" 
successfully" Jan 13 21:28:03.196800 containerd[1472]: time="2025-01-13T21:28:03.196718923Z" level=info msg="StopPodSandbox for \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\" returns successfully" Jan 13 21:28:03.198114 containerd[1472]: time="2025-01-13T21:28:03.198016386Z" level=info msg="StopPodSandbox for \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\"" Jan 13 21:28:03.198956 containerd[1472]: time="2025-01-13T21:28:03.198321739Z" level=info msg="TearDown network for sandbox \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\" successfully" Jan 13 21:28:03.198956 containerd[1472]: time="2025-01-13T21:28:03.198401308Z" level=info msg="StopPodSandbox for \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\" returns successfully" Jan 13 21:28:03.200159 containerd[1472]: time="2025-01-13T21:28:03.199112412Z" level=info msg="StopPodSandbox for \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\"" Jan 13 21:28:03.200159 containerd[1472]: time="2025-01-13T21:28:03.199685256Z" level=info msg="TearDown network for sandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" successfully" Jan 13 21:28:03.200159 containerd[1472]: time="2025-01-13T21:28:03.199771428Z" level=info msg="StopPodSandbox for \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" returns successfully" Jan 13 21:28:03.200689 containerd[1472]: time="2025-01-13T21:28:03.200621833Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\"" Jan 13 21:28:03.200851 containerd[1472]: time="2025-01-13T21:28:03.200800408Z" level=info msg="TearDown network for sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" successfully" Jan 13 21:28:03.200851 containerd[1472]: time="2025-01-13T21:28:03.200840944Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" returns 
successfully" Jan 13 21:28:03.202361 containerd[1472]: time="2025-01-13T21:28:03.201892095Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\"" Jan 13 21:28:03.202361 containerd[1472]: time="2025-01-13T21:28:03.202066683Z" level=info msg="TearDown network for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" successfully" Jan 13 21:28:03.202361 containerd[1472]: time="2025-01-13T21:28:03.202094956Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" returns successfully" Jan 13 21:28:03.203507 containerd[1472]: time="2025-01-13T21:28:03.203403440Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\"" Jan 13 21:28:03.203742 containerd[1472]: time="2025-01-13T21:28:03.203684317Z" level=info msg="TearDown network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" successfully" Jan 13 21:28:03.203857 containerd[1472]: time="2025-01-13T21:28:03.203723400Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" returns successfully" Jan 13 21:28:03.204548 containerd[1472]: time="2025-01-13T21:28:03.204332593Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\"" Jan 13 21:28:03.204548 containerd[1472]: time="2025-01-13T21:28:03.204515215Z" level=info msg="TearDown network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" successfully" Jan 13 21:28:03.204548 containerd[1472]: time="2025-01-13T21:28:03.204541534Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" returns successfully" Jan 13 21:28:03.209386 containerd[1472]: time="2025-01-13T21:28:03.208659518Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:10,}" Jan 13 21:28:03.249484 kubelet[1863]: I0113 21:28:03.249359 1863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6vggm" podStartSLOduration=5.72156623 podStartE2EDuration="27.24932176s" podCreationTimestamp="2025-01-13 21:27:36 +0000 UTC" firstStartedPulling="2025-01-13 21:27:40.706705589 +0000 UTC m=+4.417683886" lastFinishedPulling="2025-01-13 21:28:02.234461089 +0000 UTC m=+25.945439416" observedRunningTime="2025-01-13 21:28:03.246228077 +0000 UTC m=+26.957206385" watchObservedRunningTime="2025-01-13 21:28:03.24932176 +0000 UTC m=+26.960300057" Jan 13 21:28:03.686747 systemd-networkd[1378]: cali0dfd10a1e35: Link UP Jan 13 21:28:03.688284 systemd-networkd[1378]: cali0dfd10a1e35: Gained carrier Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.389 [INFO][2943] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.473 [INFO][2943] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.197-k8s-nginx--deployment--85f456d6dd--vt9gg-eth0 nginx-deployment-85f456d6dd- default 0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec 1207 0 2025-01-13 21:27:56 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.197 nginx-deployment-85f456d6dd-vt9gg eth0 default [] [] [kns.default ksa.default.default] cali0dfd10a1e35 [] []}} ContainerID="66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" Namespace="default" Pod="nginx-deployment-85f456d6dd-vt9gg" WorkloadEndpoint="172.24.4.197-k8s-nginx--deployment--85f456d6dd--vt9gg-" Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.473 [INFO][2943] cni-plugin/k8s.go 77: 
Extracted identifiers for CmdAddK8s ContainerID="66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" Namespace="default" Pod="nginx-deployment-85f456d6dd-vt9gg" WorkloadEndpoint="172.24.4.197-k8s-nginx--deployment--85f456d6dd--vt9gg-eth0" Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.552 [INFO][2955] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" HandleID="k8s-pod-network.66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" Workload="172.24.4.197-k8s-nginx--deployment--85f456d6dd--vt9gg-eth0" Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.577 [INFO][2955] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" HandleID="k8s-pod-network.66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" Workload="172.24.4.197-k8s-nginx--deployment--85f456d6dd--vt9gg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000381210), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.197", "pod":"nginx-deployment-85f456d6dd-vt9gg", "timestamp":"2025-01-13 21:28:03.55260657 +0000 UTC"}, Hostname:"172.24.4.197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.577 [INFO][2955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.577 [INFO][2955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.577 [INFO][2955] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.197' Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.581 [INFO][2955] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" host="172.24.4.197" Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.589 [INFO][2955] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.197" Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.598 [INFO][2955] ipam/ipam.go 489: Trying affinity for 192.168.83.64/26 host="172.24.4.197" Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.602 [INFO][2955] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.64/26 host="172.24.4.197" Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.607 [INFO][2955] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="172.24.4.197" Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.607 [INFO][2955] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" host="172.24.4.197" Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.610 [INFO][2955] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9 Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.643 [INFO][2955] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" host="172.24.4.197" Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.651 [INFO][2955] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.65/26] block=192.168.83.64/26 
handle="k8s-pod-network.66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" host="172.24.4.197" Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.651 [INFO][2955] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.65/26] handle="k8s-pod-network.66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" host="172.24.4.197" Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.652 [INFO][2955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:28:03.720258 containerd[1472]: 2025-01-13 21:28:03.652 [INFO][2955] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.65/26] IPv6=[] ContainerID="66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" HandleID="k8s-pod-network.66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" Workload="172.24.4.197-k8s-nginx--deployment--85f456d6dd--vt9gg-eth0" Jan 13 21:28:03.721327 containerd[1472]: 2025-01-13 21:28:03.657 [INFO][2943] cni-plugin/k8s.go 386: Populated endpoint ContainerID="66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" Namespace="default" Pod="nginx-deployment-85f456d6dd-vt9gg" WorkloadEndpoint="172.24.4.197-k8s-nginx--deployment--85f456d6dd--vt9gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.197-k8s-nginx--deployment--85f456d6dd--vt9gg-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.197", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-vt9gg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.83.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0dfd10a1e35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:03.721327 containerd[1472]: 2025-01-13 21:28:03.658 [INFO][2943] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.65/32] ContainerID="66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" Namespace="default" Pod="nginx-deployment-85f456d6dd-vt9gg" WorkloadEndpoint="172.24.4.197-k8s-nginx--deployment--85f456d6dd--vt9gg-eth0" Jan 13 21:28:03.721327 containerd[1472]: 2025-01-13 21:28:03.658 [INFO][2943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dfd10a1e35 ContainerID="66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" Namespace="default" Pod="nginx-deployment-85f456d6dd-vt9gg" WorkloadEndpoint="172.24.4.197-k8s-nginx--deployment--85f456d6dd--vt9gg-eth0" Jan 13 21:28:03.721327 containerd[1472]: 2025-01-13 21:28:03.689 [INFO][2943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" Namespace="default" Pod="nginx-deployment-85f456d6dd-vt9gg" WorkloadEndpoint="172.24.4.197-k8s-nginx--deployment--85f456d6dd--vt9gg-eth0" Jan 13 21:28:03.721327 containerd[1472]: 2025-01-13 21:28:03.690 [INFO][2943] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" Namespace="default" Pod="nginx-deployment-85f456d6dd-vt9gg" 
WorkloadEndpoint="172.24.4.197-k8s-nginx--deployment--85f456d6dd--vt9gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.197-k8s-nginx--deployment--85f456d6dd--vt9gg-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.197", ContainerID:"66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9", Pod:"nginx-deployment-85f456d6dd-vt9gg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.83.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0dfd10a1e35", MAC:"62:27:43:2d:62:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:03.721327 containerd[1472]: 2025-01-13 21:28:03.716 [INFO][2943] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9" Namespace="default" Pod="nginx-deployment-85f456d6dd-vt9gg" WorkloadEndpoint="172.24.4.197-k8s-nginx--deployment--85f456d6dd--vt9gg-eth0" Jan 13 21:28:03.768503 containerd[1472]: time="2025-01-13T21:28:03.767441142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:03.768503 containerd[1472]: time="2025-01-13T21:28:03.767561679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:03.768503 containerd[1472]: time="2025-01-13T21:28:03.767608406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:03.768503 containerd[1472]: time="2025-01-13T21:28:03.767792582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:03.794350 systemd[1]: Started cri-containerd-66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9.scope - libcontainer container 66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9. Jan 13 21:28:03.821637 kubelet[1863]: E0113 21:28:03.821573 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:03.832691 containerd[1472]: time="2025-01-13T21:28:03.832344008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vt9gg,Uid:0cb30184-6c95-43d3-a7ac-5e7dfdbc75ec,Namespace:default,Attempt:7,} returns sandbox id \"66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9\"" Jan 13 21:28:03.835013 containerd[1472]: time="2025-01-13T21:28:03.834991453Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 21:28:03.845691 systemd-networkd[1378]: caliddc140486fd: Link UP Jan 13 21:28:03.845961 systemd-networkd[1378]: caliddc140486fd: Gained carrier Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.582 [INFO][2960] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.608 [INFO][2960] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.197-k8s-csi--node--driver--d7zd4-eth0 csi-node-driver- calico-system bd59e032-03c6-4c4d-bd4a-80c72aff8c72 1084 0 2025-01-13 21:27:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.24.4.197 csi-node-driver-d7zd4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliddc140486fd [] []}} ContainerID="a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" Namespace="calico-system" Pod="csi-node-driver-d7zd4" WorkloadEndpoint="172.24.4.197-k8s-csi--node--driver--d7zd4-" Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.608 [INFO][2960] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" Namespace="calico-system" Pod="csi-node-driver-d7zd4" WorkloadEndpoint="172.24.4.197-k8s-csi--node--driver--d7zd4-eth0" Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.734 [INFO][2974] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" HandleID="k8s-pod-network.a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" Workload="172.24.4.197-k8s-csi--node--driver--d7zd4-eth0" Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.756 [INFO][2974] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" HandleID="k8s-pod-network.a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" Workload="172.24.4.197-k8s-csi--node--driver--d7zd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ecc50), 
Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.197", "pod":"csi-node-driver-d7zd4", "timestamp":"2025-01-13 21:28:03.734075355 +0000 UTC"}, Hostname:"172.24.4.197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.756 [INFO][2974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.756 [INFO][2974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.757 [INFO][2974] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.197' Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.761 [INFO][2974] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" host="172.24.4.197" Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.765 [INFO][2974] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.197" Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.773 [INFO][2974] ipam/ipam.go 489: Trying affinity for 192.168.83.64/26 host="172.24.4.197" Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.775 [INFO][2974] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.64/26 host="172.24.4.197" Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.778 [INFO][2974] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="172.24.4.197" Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.778 [INFO][2974] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" host="172.24.4.197" Jan 13 
21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.781 [INFO][2974] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.810 [INFO][2974] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" host="172.24.4.197" Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.834 [INFO][2974] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.66/26] block=192.168.83.64/26 handle="k8s-pod-network.a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" host="172.24.4.197" Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.834 [INFO][2974] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.66/26] handle="k8s-pod-network.a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" host="172.24.4.197" Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.834 [INFO][2974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:28:03.930266 containerd[1472]: 2025-01-13 21:28:03.834 [INFO][2974] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.66/26] IPv6=[] ContainerID="a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" HandleID="k8s-pod-network.a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" Workload="172.24.4.197-k8s-csi--node--driver--d7zd4-eth0" Jan 13 21:28:03.930903 containerd[1472]: 2025-01-13 21:28:03.838 [INFO][2960] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" Namespace="calico-system" Pod="csi-node-driver-d7zd4" WorkloadEndpoint="172.24.4.197-k8s-csi--node--driver--d7zd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.197-k8s-csi--node--driver--d7zd4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd59e032-03c6-4c4d-bd4a-80c72aff8c72", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 27, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.197", ContainerID:"", Pod:"csi-node-driver-d7zd4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliddc140486fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:03.930903 containerd[1472]: 2025-01-13 21:28:03.838 [INFO][2960] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.66/32] ContainerID="a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" Namespace="calico-system" Pod="csi-node-driver-d7zd4" WorkloadEndpoint="172.24.4.197-k8s-csi--node--driver--d7zd4-eth0" Jan 13 21:28:03.930903 containerd[1472]: 2025-01-13 21:28:03.838 [INFO][2960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddc140486fd ContainerID="a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" Namespace="calico-system" Pod="csi-node-driver-d7zd4" WorkloadEndpoint="172.24.4.197-k8s-csi--node--driver--d7zd4-eth0" Jan 13 21:28:03.930903 containerd[1472]: 2025-01-13 21:28:03.848 [INFO][2960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" Namespace="calico-system" Pod="csi-node-driver-d7zd4" WorkloadEndpoint="172.24.4.197-k8s-csi--node--driver--d7zd4-eth0" Jan 13 21:28:03.930903 containerd[1472]: 2025-01-13 21:28:03.849 [INFO][2960] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" Namespace="calico-system" Pod="csi-node-driver-d7zd4" WorkloadEndpoint="172.24.4.197-k8s-csi--node--driver--d7zd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.197-k8s-csi--node--driver--d7zd4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd59e032-03c6-4c4d-bd4a-80c72aff8c72", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.January, 
13, 21, 27, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.197", ContainerID:"a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca", Pod:"csi-node-driver-d7zd4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliddc140486fd", MAC:"f2:09:91:17:46:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:03.930903 containerd[1472]: 2025-01-13 21:28:03.927 [INFO][2960] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca" Namespace="calico-system" Pod="csi-node-driver-d7zd4" WorkloadEndpoint="172.24.4.197-k8s-csi--node--driver--d7zd4-eth0" Jan 13 21:28:03.982738 containerd[1472]: time="2025-01-13T21:28:03.982191827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:03.982738 containerd[1472]: time="2025-01-13T21:28:03.982266838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:03.984566 containerd[1472]: time="2025-01-13T21:28:03.982313676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:03.984566 containerd[1472]: time="2025-01-13T21:28:03.982600694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:04.019791 systemd[1]: Started cri-containerd-a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca.scope - libcontainer container a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca. Jan 13 21:28:04.072345 containerd[1472]: time="2025-01-13T21:28:04.071942024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7zd4,Uid:bd59e032-03c6-4c4d-bd4a-80c72aff8c72,Namespace:calico-system,Attempt:10,} returns sandbox id \"a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca\"" Jan 13 21:28:04.114219 kernel: bpftool[3182]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:28:04.497310 systemd-networkd[1378]: vxlan.calico: Link UP Jan 13 21:28:04.497328 systemd-networkd[1378]: vxlan.calico: Gained carrier Jan 13 21:28:04.822428 kubelet[1863]: E0113 21:28:04.822342 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:04.933352 systemd-networkd[1378]: caliddc140486fd: Gained IPv6LL Jan 13 21:28:05.380536 systemd-networkd[1378]: cali0dfd10a1e35: Gained IPv6LL Jan 13 21:28:05.823391 kubelet[1863]: E0113 21:28:05.823307 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:05.957292 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL Jan 13 21:28:06.823859 kubelet[1863]: E0113 21:28:06.823694 1863 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:07.824728 kubelet[1863]: E0113 21:28:07.824692 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:08.640802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258412438.mount: Deactivated successfully. Jan 13 21:28:08.824814 kubelet[1863]: E0113 21:28:08.824782 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:09.826303 kubelet[1863]: E0113 21:28:09.826255 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:09.942406 containerd[1472]: time="2025-01-13T21:28:09.942303012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:09.944797 containerd[1472]: time="2025-01-13T21:28:09.944705471Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 13 21:28:09.945863 containerd[1472]: time="2025-01-13T21:28:09.945736080Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:09.952372 containerd[1472]: time="2025-01-13T21:28:09.952273441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:09.954928 containerd[1472]: time="2025-01-13T21:28:09.954869758Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest 
\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 6.119478144s" Jan 13 21:28:09.955295 containerd[1472]: time="2025-01-13T21:28:09.955066412Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 21:28:09.957495 containerd[1472]: time="2025-01-13T21:28:09.957444644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:28:09.960378 containerd[1472]: time="2025-01-13T21:28:09.960050038Z" level=info msg="CreateContainer within sandbox \"66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 21:28:09.989376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1026674995.mount: Deactivated successfully. Jan 13 21:28:09.992299 containerd[1472]: time="2025-01-13T21:28:09.992221823Z" level=info msg="CreateContainer within sandbox \"66723d9c061212d7ffb03c9a35a6c65d9fa15bc3455d19fb0e99b9c23b1f2ca9\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"9a08862a1b728f45cc3f8ce4701c75dd65bfa1e902036d7276c7d70f775dcf50\"" Jan 13 21:28:09.994252 containerd[1472]: time="2025-01-13T21:28:09.993057612Z" level=info msg="StartContainer for \"9a08862a1b728f45cc3f8ce4701c75dd65bfa1e902036d7276c7d70f775dcf50\"" Jan 13 21:28:10.037274 systemd[1]: Started cri-containerd-9a08862a1b728f45cc3f8ce4701c75dd65bfa1e902036d7276c7d70f775dcf50.scope - libcontainer container 9a08862a1b728f45cc3f8ce4701c75dd65bfa1e902036d7276c7d70f775dcf50. 
Jan 13 21:28:10.064741 containerd[1472]: time="2025-01-13T21:28:10.064520776Z" level=info msg="StartContainer for \"9a08862a1b728f45cc3f8ce4701c75dd65bfa1e902036d7276c7d70f775dcf50\" returns successfully" Jan 13 21:28:10.826593 kubelet[1863]: E0113 21:28:10.826484 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:11.827206 kubelet[1863]: E0113 21:28:11.827148 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:11.829333 containerd[1472]: time="2025-01-13T21:28:11.828560780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:11.829626 containerd[1472]: time="2025-01-13T21:28:11.829586707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 21:28:11.830531 containerd[1472]: time="2025-01-13T21:28:11.830476126Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:11.834528 containerd[1472]: time="2025-01-13T21:28:11.834453777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:11.835344 containerd[1472]: time="2025-01-13T21:28:11.835315864Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.877816034s" Jan 13 21:28:11.835430 containerd[1472]: 
time="2025-01-13T21:28:11.835413780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 21:28:11.837738 containerd[1472]: time="2025-01-13T21:28:11.837573579Z" level=info msg="CreateContainer within sandbox \"a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:28:11.862565 containerd[1472]: time="2025-01-13T21:28:11.862523473Z" level=info msg="CreateContainer within sandbox \"a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"93f8e963045f97a76b000d3f3c461deacd554f236996526dfc45ad9919862df7\"" Jan 13 21:28:11.863171 containerd[1472]: time="2025-01-13T21:28:11.863096191Z" level=info msg="StartContainer for \"93f8e963045f97a76b000d3f3c461deacd554f236996526dfc45ad9919862df7\"" Jan 13 21:28:11.906261 systemd[1]: Started cri-containerd-93f8e963045f97a76b000d3f3c461deacd554f236996526dfc45ad9919862df7.scope - libcontainer container 93f8e963045f97a76b000d3f3c461deacd554f236996526dfc45ad9919862df7. 
Jan 13 21:28:11.949823 containerd[1472]: time="2025-01-13T21:28:11.949614051Z" level=info msg="StartContainer for \"93f8e963045f97a76b000d3f3c461deacd554f236996526dfc45ad9919862df7\" returns successfully" Jan 13 21:28:11.951162 containerd[1472]: time="2025-01-13T21:28:11.950936422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:28:12.827831 kubelet[1863]: E0113 21:28:12.827742 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:13.827908 kubelet[1863]: E0113 21:28:13.827858 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:14.141223 containerd[1472]: time="2025-01-13T21:28:14.140999416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:14.142461 containerd[1472]: time="2025-01-13T21:28:14.142419757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 21:28:14.143719 containerd[1472]: time="2025-01-13T21:28:14.143655858Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:14.147463 containerd[1472]: time="2025-01-13T21:28:14.147402285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:14.148320 containerd[1472]: time="2025-01-13T21:28:14.148217900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.197252283s" Jan 13 21:28:14.148320 containerd[1472]: time="2025-01-13T21:28:14.148246444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 21:28:14.151161 containerd[1472]: time="2025-01-13T21:28:14.151081434Z" level=info msg="CreateContainer within sandbox \"a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 21:28:14.176772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount957135790.mount: Deactivated successfully. Jan 13 21:28:14.188276 containerd[1472]: time="2025-01-13T21:28:14.188201421Z" level=info msg="CreateContainer within sandbox \"a6683f3dd40f507ce5d1172e0ca331e96bae59c1bf94ce2be506d054944696ca\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4065e6be4fc8d9920d64795ba7ccf37eb9417155091c20a4f308acdde9a0d5a1\"" Jan 13 21:28:14.188921 containerd[1472]: time="2025-01-13T21:28:14.188854499Z" level=info msg="StartContainer for \"4065e6be4fc8d9920d64795ba7ccf37eb9417155091c20a4f308acdde9a0d5a1\"" Jan 13 21:28:14.224186 systemd[1]: run-containerd-runc-k8s.io-4065e6be4fc8d9920d64795ba7ccf37eb9417155091c20a4f308acdde9a0d5a1-runc.WM6VHZ.mount: Deactivated successfully. Jan 13 21:28:14.232282 systemd[1]: Started cri-containerd-4065e6be4fc8d9920d64795ba7ccf37eb9417155091c20a4f308acdde9a0d5a1.scope - libcontainer container 4065e6be4fc8d9920d64795ba7ccf37eb9417155091c20a4f308acdde9a0d5a1. 
Jan 13 21:28:14.276623 containerd[1472]: time="2025-01-13T21:28:14.275509662Z" level=info msg="StartContainer for \"4065e6be4fc8d9920d64795ba7ccf37eb9417155091c20a4f308acdde9a0d5a1\" returns successfully" Jan 13 21:28:14.828442 kubelet[1863]: E0113 21:28:14.828339 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:14.944806 kubelet[1863]: I0113 21:28:14.944368 1863 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 21:28:14.944806 kubelet[1863]: I0113 21:28:14.944425 1863 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 21:28:15.283225 kubelet[1863]: I0113 21:28:15.283047 1863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-d7zd4" podStartSLOduration=29.209211045 podStartE2EDuration="39.283015608s" podCreationTimestamp="2025-01-13 21:27:36 +0000 UTC" firstStartedPulling="2025-01-13 21:28:04.075567093 +0000 UTC m=+27.786545400" lastFinishedPulling="2025-01-13 21:28:14.149371666 +0000 UTC m=+37.860349963" observedRunningTime="2025-01-13 21:28:15.281520658 +0000 UTC m=+38.992499006" watchObservedRunningTime="2025-01-13 21:28:15.283015608 +0000 UTC m=+38.993993955" Jan 13 21:28:15.283591 kubelet[1863]: I0113 21:28:15.283508 1863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-vt9gg" podStartSLOduration=13.160002136 podStartE2EDuration="19.283490428s" podCreationTimestamp="2025-01-13 21:27:56 +0000 UTC" firstStartedPulling="2025-01-13 21:28:03.83356605 +0000 UTC m=+27.544544347" lastFinishedPulling="2025-01-13 21:28:09.957054312 +0000 UTC m=+33.668032639" observedRunningTime="2025-01-13 21:28:10.248288791 +0000 UTC m=+33.959267139" 
watchObservedRunningTime="2025-01-13 21:28:15.283490428 +0000 UTC m=+38.994468775" Jan 13 21:28:15.829053 kubelet[1863]: E0113 21:28:15.828923 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:16.800989 kubelet[1863]: E0113 21:28:16.800861 1863 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:16.829901 kubelet[1863]: E0113 21:28:16.829827 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:17.830878 kubelet[1863]: E0113 21:28:17.830786 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:18.831408 kubelet[1863]: E0113 21:28:18.831330 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:19.016175 kubelet[1863]: I0113 21:28:19.015836 1863 topology_manager.go:215] "Topology Admit Handler" podUID="31f919ec-a970-4e88-9f9c-abea9aac905e" podNamespace="default" podName="nfs-server-provisioner-0" Jan 13 21:28:19.030155 systemd[1]: Created slice kubepods-besteffort-pod31f919ec_a970_4e88_9f9c_abea9aac905e.slice - libcontainer container kubepods-besteffort-pod31f919ec_a970_4e88_9f9c_abea9aac905e.slice. 
Jan 13 21:28:19.118237 kubelet[1863]: I0113 21:28:19.117571 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/31f919ec-a970-4e88-9f9c-abea9aac905e-data\") pod \"nfs-server-provisioner-0\" (UID: \"31f919ec-a970-4e88-9f9c-abea9aac905e\") " pod="default/nfs-server-provisioner-0" Jan 13 21:28:19.118237 kubelet[1863]: I0113 21:28:19.117662 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9b7h\" (UniqueName: \"kubernetes.io/projected/31f919ec-a970-4e88-9f9c-abea9aac905e-kube-api-access-b9b7h\") pod \"nfs-server-provisioner-0\" (UID: \"31f919ec-a970-4e88-9f9c-abea9aac905e\") " pod="default/nfs-server-provisioner-0" Jan 13 21:28:19.338064 containerd[1472]: time="2025-01-13T21:28:19.337986081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:31f919ec-a970-4e88-9f9c-abea9aac905e,Namespace:default,Attempt:0,}" Jan 13 21:28:19.610629 systemd-networkd[1378]: cali60e51b789ff: Link UP Jan 13 21:28:19.613520 systemd-networkd[1378]: cali60e51b789ff: Gained carrier Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.454 [INFO][3487] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.197-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 31f919ec-a970-4e88-9f9c-abea9aac905e 1331 0 2025-01-13 21:28:18 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.24.4.197 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] 
[kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.197-k8s-nfs--server--provisioner--0-" Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.454 [INFO][3487] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.197-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.517 [INFO][3498] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" HandleID="k8s-pod-network.bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" Workload="172.24.4.197-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.542 [INFO][3498] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" HandleID="k8s-pod-network.bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" Workload="172.24.4.197-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265620), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.197", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-13 21:28:19.517730402 +0000 UTC"}, Hostname:"172.24.4.197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.542 [INFO][3498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.542 [INFO][3498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.542 [INFO][3498] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.197' Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.545 [INFO][3498] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" host="172.24.4.197" Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.556 [INFO][3498] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.197" Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.566 [INFO][3498] ipam/ipam.go 489: Trying affinity for 192.168.83.64/26 host="172.24.4.197" Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.571 [INFO][3498] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.64/26 host="172.24.4.197" Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.576 [INFO][3498] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="172.24.4.197" Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.576 [INFO][3498] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" host="172.24.4.197" Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.580 [INFO][3498] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.588 [INFO][3498] ipam/ipam.go 1203: Writing block in 
order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" host="172.24.4.197" Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.600 [INFO][3498] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.67/26] block=192.168.83.64/26 handle="k8s-pod-network.bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" host="172.24.4.197" Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.601 [INFO][3498] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.67/26] handle="k8s-pod-network.bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" host="172.24.4.197" Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.601 [INFO][3498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:28:19.637074 containerd[1472]: 2025-01-13 21:28:19.601 [INFO][3498] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.67/26] IPv6=[] ContainerID="bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" HandleID="k8s-pod-network.bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" Workload="172.24.4.197-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:28:19.638786 containerd[1472]: 2025-01-13 21:28:19.604 [INFO][3487] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.197-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.197-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"31f919ec-a970-4e88-9f9c-abea9aac905e", ResourceVersion:"1331", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 28, 18, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.197", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.83.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:19.638786 containerd[1472]: 2025-01-13 21:28:19.605 [INFO][3487] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.67/32] ContainerID="bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.197-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:28:19.638786 containerd[1472]: 2025-01-13 21:28:19.605 [INFO][3487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.197-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:28:19.638786 containerd[1472]: 2025-01-13 21:28:19.614 [INFO][3487] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.197-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:28:19.640269 containerd[1472]: 2025-01-13 21:28:19.616 [INFO][3487] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.197-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.197-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"31f919ec-a970-4e88-9f9c-abea9aac905e", ResourceVersion:"1331", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 28, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.197", ContainerID:"bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.83.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"4e:cd:a5:99:23:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:28:19.640269 containerd[1472]: 2025-01-13 21:28:19.634 [INFO][3487] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="172.24.4.197-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:28:19.685169 containerd[1472]: time="2025-01-13T21:28:19.684753491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:19.685169 containerd[1472]: time="2025-01-13T21:28:19.684827892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:19.685169 containerd[1472]: time="2025-01-13T21:28:19.684880021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:19.685169 containerd[1472]: time="2025-01-13T21:28:19.684962376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:19.710303 systemd[1]: Started cri-containerd-bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f.scope - libcontainer container bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f. Jan 13 21:28:19.749834 containerd[1472]: time="2025-01-13T21:28:19.749774475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:31f919ec-a970-4e88-9f9c-abea9aac905e,Namespace:default,Attempt:0,} returns sandbox id \"bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f\"" Jan 13 21:28:19.752239 containerd[1472]: time="2025-01-13T21:28:19.751997334Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 21:28:19.832469 kubelet[1863]: E0113 21:28:19.832371 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:28:20.240379 systemd[1]: run-containerd-runc-k8s.io-bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f-runc.SM6wzK.mount: Deactivated successfully. 
Jan 13 21:28:20.495571 systemd[1]: run-containerd-runc-k8s.io-c4c4bcd5dd837fe43b372f1460a0f824e06569990aec830964a45d282091f7f1-runc.M2N5Ps.mount: Deactivated successfully.
Jan 13 21:28:20.833249 kubelet[1863]: E0113 21:28:20.833153 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:20.997245 systemd-networkd[1378]: cali60e51b789ff: Gained IPv6LL
Jan 13 21:28:21.833837 kubelet[1863]: E0113 21:28:21.833798 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:22.832816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount517093491.mount: Deactivated successfully.
Jan 13 21:28:22.834464 kubelet[1863]: E0113 21:28:22.834439 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:23.835453 kubelet[1863]: E0113 21:28:23.835392 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:24.837080 kubelet[1863]: E0113 21:28:24.837033 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:25.837564 kubelet[1863]: E0113 21:28:25.837296 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:26.167312 containerd[1472]: time="2025-01-13T21:28:26.167197039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:28:26.169199 containerd[1472]: time="2025-01-13T21:28:26.169161740Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414"
Jan 13 21:28:26.169702 containerd[1472]: time="2025-01-13T21:28:26.169646283Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:28:26.174350 containerd[1472]: time="2025-01-13T21:28:26.173168330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:28:26.174350 containerd[1472]: time="2025-01-13T21:28:26.174219600Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.422194153s"
Jan 13 21:28:26.174350 containerd[1472]: time="2025-01-13T21:28:26.174258052Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 13 21:28:26.176898 containerd[1472]: time="2025-01-13T21:28:26.176872950Z" level=info msg="CreateContainer within sandbox \"bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 13 21:28:26.190142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94497583.mount: Deactivated successfully.
Jan 13 21:28:26.197007 containerd[1472]: time="2025-01-13T21:28:26.196961620Z" level=info msg="CreateContainer within sandbox \"bf2fbf2a2ee03900055671153148c9224ba8af55e652d612eaf0c3bfdce3dc1f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9de04941be0808e28dc8e9f704a051e965bd17bedbb81cfb5ee0e784e5c31edb\""
Jan 13 21:28:26.197716 containerd[1472]: time="2025-01-13T21:28:26.197554307Z" level=info msg="StartContainer for \"9de04941be0808e28dc8e9f704a051e965bd17bedbb81cfb5ee0e784e5c31edb\""
Jan 13 21:28:26.239263 systemd[1]: Started cri-containerd-9de04941be0808e28dc8e9f704a051e965bd17bedbb81cfb5ee0e784e5c31edb.scope - libcontainer container 9de04941be0808e28dc8e9f704a051e965bd17bedbb81cfb5ee0e784e5c31edb.
Jan 13 21:28:26.266260 containerd[1472]: time="2025-01-13T21:28:26.266209987Z" level=info msg="StartContainer for \"9de04941be0808e28dc8e9f704a051e965bd17bedbb81cfb5ee0e784e5c31edb\" returns successfully"
Jan 13 21:28:26.332676 kubelet[1863]: I0113 21:28:26.332476 1863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.908879156 podStartE2EDuration="8.332460404s" podCreationTimestamp="2025-01-13 21:28:18 +0000 UTC" firstStartedPulling="2025-01-13 21:28:19.751609381 +0000 UTC m=+43.462587678" lastFinishedPulling="2025-01-13 21:28:26.175190629 +0000 UTC m=+49.886168926" observedRunningTime="2025-01-13 21:28:26.332058246 +0000 UTC m=+50.043036553" watchObservedRunningTime="2025-01-13 21:28:26.332460404 +0000 UTC m=+50.043438711"
Jan 13 21:28:26.838025 kubelet[1863]: E0113 21:28:26.837965 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:27.838472 kubelet[1863]: E0113 21:28:27.838392 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:28.839072 kubelet[1863]: E0113 21:28:28.838996 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:29.840321 kubelet[1863]: E0113 21:28:29.840195 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:30.841527 kubelet[1863]: E0113 21:28:30.841400 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:31.841919 kubelet[1863]: E0113 21:28:31.841831 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:32.843807 kubelet[1863]: E0113 21:28:32.842874 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:33.843281 kubelet[1863]: E0113 21:28:33.843184 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:34.844352 kubelet[1863]: E0113 21:28:34.844232 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:35.845410 kubelet[1863]: E0113 21:28:35.845257 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:36.801212 kubelet[1863]: E0113 21:28:36.801078 1863 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:36.845931 kubelet[1863]: E0113 21:28:36.845790 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:36.875070 containerd[1472]: time="2025-01-13T21:28:36.874880751Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\""
Jan 13 21:28:36.875888 containerd[1472]: time="2025-01-13T21:28:36.875111474Z" level=info msg="TearDown network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" successfully"
Jan 13 21:28:36.875888 containerd[1472]: time="2025-01-13T21:28:36.875190233Z" level=info msg="StopPodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" returns successfully"
Jan 13 21:28:36.877580 containerd[1472]: time="2025-01-13T21:28:36.876800620Z" level=info msg="RemovePodSandbox for \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\""
Jan 13 21:28:36.877580 containerd[1472]: time="2025-01-13T21:28:36.876863027Z" level=info msg="Forcibly stopping sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\""
Jan 13 21:28:36.877580 containerd[1472]: time="2025-01-13T21:28:36.877002871Z" level=info msg="TearDown network for sandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" successfully"
Jan 13 21:28:36.890171 containerd[1472]: time="2025-01-13T21:28:36.889186006Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.890171 containerd[1472]: time="2025-01-13T21:28:36.889338161Z" level=info msg="RemovePodSandbox \"fe0697aac56e3740073e570e34d5ad0da83612aa33e4a41aa9ca2f0572a6eb52\" returns successfully"
Jan 13 21:28:36.891663 containerd[1472]: time="2025-01-13T21:28:36.891483094Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\""
Jan 13 21:28:36.892780 containerd[1472]: time="2025-01-13T21:28:36.892680306Z" level=info msg="TearDown network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" successfully"
Jan 13 21:28:36.893369 containerd[1472]: time="2025-01-13T21:28:36.893029542Z" level=info msg="StopPodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" returns successfully"
Jan 13 21:28:36.895631 containerd[1472]: time="2025-01-13T21:28:36.895359532Z" level=info msg="RemovePodSandbox for \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\""
Jan 13 21:28:36.895631 containerd[1472]: time="2025-01-13T21:28:36.895418353Z" level=info msg="Forcibly stopping sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\""
Jan 13 21:28:36.895631 containerd[1472]: time="2025-01-13T21:28:36.895572483Z" level=info msg="TearDown network for sandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" successfully"
Jan 13 21:28:36.902215 containerd[1472]: time="2025-01-13T21:28:36.901718898Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.902215 containerd[1472]: time="2025-01-13T21:28:36.901872076Z" level=info msg="RemovePodSandbox \"48dfaf80c356f42260c4efb9f0e7f8a0a42bbf59518167fb310185698ef3e50a\" returns successfully"
Jan 13 21:28:36.903350 containerd[1472]: time="2025-01-13T21:28:36.903281656Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\""
Jan 13 21:28:36.903497 containerd[1472]: time="2025-01-13T21:28:36.903451606Z" level=info msg="TearDown network for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" successfully"
Jan 13 21:28:36.903497 containerd[1472]: time="2025-01-13T21:28:36.903486892Z" level=info msg="StopPodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" returns successfully"
Jan 13 21:28:36.904550 containerd[1472]: time="2025-01-13T21:28:36.904366897Z" level=info msg="RemovePodSandbox for \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\""
Jan 13 21:28:36.904669 containerd[1472]: time="2025-01-13T21:28:36.904509123Z" level=info msg="Forcibly stopping sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\""
Jan 13 21:28:36.904894 containerd[1472]: time="2025-01-13T21:28:36.904763533Z" level=info msg="TearDown network for sandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" successfully"
Jan 13 21:28:36.910072 containerd[1472]: time="2025-01-13T21:28:36.909982504Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.910233 containerd[1472]: time="2025-01-13T21:28:36.910077844Z" level=info msg="RemovePodSandbox \"97ef16953337f0b8fe847821da8d85b7009c837976d678be2572756a005003e3\" returns successfully"
Jan 13 21:28:36.911531 containerd[1472]: time="2025-01-13T21:28:36.911451436Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\""
Jan 13 21:28:36.911992 containerd[1472]: time="2025-01-13T21:28:36.911826661Z" level=info msg="TearDown network for sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" successfully"
Jan 13 21:28:36.911992 containerd[1472]: time="2025-01-13T21:28:36.911865584Z" level=info msg="StopPodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" returns successfully"
Jan 13 21:28:36.912715 containerd[1472]: time="2025-01-13T21:28:36.912651072Z" level=info msg="RemovePodSandbox for \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\""
Jan 13 21:28:36.912715 containerd[1472]: time="2025-01-13T21:28:36.912705774Z" level=info msg="Forcibly stopping sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\""
Jan 13 21:28:36.913053 containerd[1472]: time="2025-01-13T21:28:36.912836870Z" level=info msg="TearDown network for sandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" successfully"
Jan 13 21:28:36.918548 containerd[1472]: time="2025-01-13T21:28:36.918428824Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.918679 containerd[1472]: time="2025-01-13T21:28:36.918571532Z" level=info msg="RemovePodSandbox \"701f2965067b40ca49f581d09b3353f94584447682c157a33c4e38f6e07feace\" returns successfully"
Jan 13 21:28:36.920037 containerd[1472]: time="2025-01-13T21:28:36.919337973Z" level=info msg="StopPodSandbox for \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\""
Jan 13 21:28:36.920037 containerd[1472]: time="2025-01-13T21:28:36.919502973Z" level=info msg="TearDown network for sandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" successfully"
Jan 13 21:28:36.920037 containerd[1472]: time="2025-01-13T21:28:36.919530114Z" level=info msg="StopPodSandbox for \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" returns successfully"
Jan 13 21:28:36.920345 containerd[1472]: time="2025-01-13T21:28:36.920045783Z" level=info msg="RemovePodSandbox for \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\""
Jan 13 21:28:36.920345 containerd[1472]: time="2025-01-13T21:28:36.920092001Z" level=info msg="Forcibly stopping sandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\""
Jan 13 21:28:36.920345 containerd[1472]: time="2025-01-13T21:28:36.920271237Z" level=info msg="TearDown network for sandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" successfully"
Jan 13 21:28:36.925605 containerd[1472]: time="2025-01-13T21:28:36.925475882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.925605 containerd[1472]: time="2025-01-13T21:28:36.925586511Z" level=info msg="RemovePodSandbox \"b450bc9cc542ba82658b73623ff990799c4e0ef1550b6112a7cef1b2d1d2e7d3\" returns successfully"
Jan 13 21:28:36.929556 containerd[1472]: time="2025-01-13T21:28:36.926745049Z" level=info msg="StopPodSandbox for \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\""
Jan 13 21:28:36.929556 containerd[1472]: time="2025-01-13T21:28:36.926925809Z" level=info msg="TearDown network for sandbox \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\" successfully"
Jan 13 21:28:36.929556 containerd[1472]: time="2025-01-13T21:28:36.926953541Z" level=info msg="StopPodSandbox for \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\" returns successfully"
Jan 13 21:28:36.930255 containerd[1472]: time="2025-01-13T21:28:36.930207018Z" level=info msg="RemovePodSandbox for \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\""
Jan 13 21:28:36.932182 containerd[1472]: time="2025-01-13T21:28:36.930742664Z" level=info msg="Forcibly stopping sandbox \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\""
Jan 13 21:28:36.932182 containerd[1472]: time="2025-01-13T21:28:36.930897496Z" level=info msg="TearDown network for sandbox \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\" successfully"
Jan 13 21:28:36.940518 containerd[1472]: time="2025-01-13T21:28:36.940307557Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.940989 containerd[1472]: time="2025-01-13T21:28:36.940945096Z" level=info msg="RemovePodSandbox \"c52f345b317b73a9b8df8b70ca09a2a701750e97c84d5f00380a2ca529f5b61a\" returns successfully"
Jan 13 21:28:36.941957 containerd[1472]: time="2025-01-13T21:28:36.941898879Z" level=info msg="StopPodSandbox for \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\""
Jan 13 21:28:36.942164 containerd[1472]: time="2025-01-13T21:28:36.942082845Z" level=info msg="TearDown network for sandbox \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\" successfully"
Jan 13 21:28:36.942268 containerd[1472]: time="2025-01-13T21:28:36.942163276Z" level=info msg="StopPodSandbox for \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\" returns successfully"
Jan 13 21:28:36.943612 containerd[1472]: time="2025-01-13T21:28:36.943569289Z" level=info msg="RemovePodSandbox for \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\""
Jan 13 21:28:36.943834 containerd[1472]: time="2025-01-13T21:28:36.943799112Z" level=info msg="Forcibly stopping sandbox \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\""
Jan 13 21:28:36.944396 containerd[1472]: time="2025-01-13T21:28:36.944309881Z" level=info msg="TearDown network for sandbox \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\" successfully"
Jan 13 21:28:36.950202 containerd[1472]: time="2025-01-13T21:28:36.949971145Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.950202 containerd[1472]: time="2025-01-13T21:28:36.950056435Z" level=info msg="RemovePodSandbox \"db66b78aeb947ed4f0e1422b7ee637595d72d3b52b422cfc398be1061f24456f\" returns successfully"
Jan 13 21:28:36.951221 containerd[1472]: time="2025-01-13T21:28:36.950866178Z" level=info msg="StopPodSandbox for \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\""
Jan 13 21:28:36.951221 containerd[1472]: time="2025-01-13T21:28:36.951047178Z" level=info msg="TearDown network for sandbox \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\" successfully"
Jan 13 21:28:36.951221 containerd[1472]: time="2025-01-13T21:28:36.951076273Z" level=info msg="StopPodSandbox for \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\" returns successfully"
Jan 13 21:28:36.952231 containerd[1472]: time="2025-01-13T21:28:36.951868141Z" level=info msg="RemovePodSandbox for \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\""
Jan 13 21:28:36.952231 containerd[1472]: time="2025-01-13T21:28:36.951921472Z" level=info msg="Forcibly stopping sandbox \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\""
Jan 13 21:28:36.952231 containerd[1472]: time="2025-01-13T21:28:36.952047178Z" level=info msg="TearDown network for sandbox \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\" successfully"
Jan 13 21:28:36.957359 containerd[1472]: time="2025-01-13T21:28:36.957250060Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.957359 containerd[1472]: time="2025-01-13T21:28:36.957340730Z" level=info msg="RemovePodSandbox \"c876d1dbea075dcff6bfd8060bccb523df7561f1e988cb9c380a15a6d063833e\" returns successfully"
Jan 13 21:28:36.958512 containerd[1472]: time="2025-01-13T21:28:36.958017603Z" level=info msg="StopPodSandbox for \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\""
Jan 13 21:28:36.958512 containerd[1472]: time="2025-01-13T21:28:36.958229651Z" level=info msg="TearDown network for sandbox \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\" successfully"
Jan 13 21:28:36.958512 containerd[1472]: time="2025-01-13T21:28:36.958261882Z" level=info msg="StopPodSandbox for \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\" returns successfully"
Jan 13 21:28:36.959115 containerd[1472]: time="2025-01-13T21:28:36.959049052Z" level=info msg="RemovePodSandbox for \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\""
Jan 13 21:28:36.959403 containerd[1472]: time="2025-01-13T21:28:36.959362461Z" level=info msg="Forcibly stopping sandbox \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\""
Jan 13 21:28:36.959749 containerd[1472]: time="2025-01-13T21:28:36.959639793Z" level=info msg="TearDown network for sandbox \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\" successfully"
Jan 13 21:28:36.964525 containerd[1472]: time="2025-01-13T21:28:36.964404611Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.964525 containerd[1472]: time="2025-01-13T21:28:36.964483399Z" level=info msg="RemovePodSandbox \"6b02491e4eb3379d8e9ea97dea8bb25441e2f6876ab4ac35e454a965cbde5b98\" returns successfully"
Jan 13 21:28:36.965146 containerd[1472]: time="2025-01-13T21:28:36.964986174Z" level=info msg="StopPodSandbox for \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\""
Jan 13 21:28:36.965244 containerd[1472]: time="2025-01-13T21:28:36.965194255Z" level=info msg="TearDown network for sandbox \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\" successfully"
Jan 13 21:28:36.965244 containerd[1472]: time="2025-01-13T21:28:36.965222939Z" level=info msg="StopPodSandbox for \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\" returns successfully"
Jan 13 21:28:36.966238 containerd[1472]: time="2025-01-13T21:28:36.965500672Z" level=info msg="RemovePodSandbox for \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\""
Jan 13 21:28:36.966238 containerd[1472]: time="2025-01-13T21:28:36.965521400Z" level=info msg="Forcibly stopping sandbox \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\""
Jan 13 21:28:36.966238 containerd[1472]: time="2025-01-13T21:28:36.965581082Z" level=info msg="TearDown network for sandbox \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\" successfully"
Jan 13 21:28:36.968941 containerd[1472]: time="2025-01-13T21:28:36.968890344Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.968941 containerd[1472]: time="2025-01-13T21:28:36.968938775Z" level=info msg="RemovePodSandbox \"85fc718849fb74d08c5d8e43f63e30b1a9c857d04f77ce7d9ddec406a80bbc3a\" returns successfully"
Jan 13 21:28:36.969677 containerd[1472]: time="2025-01-13T21:28:36.969316034Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\""
Jan 13 21:28:36.969677 containerd[1472]: time="2025-01-13T21:28:36.969459234Z" level=info msg="TearDown network for sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" successfully"
Jan 13 21:28:36.969677 containerd[1472]: time="2025-01-13T21:28:36.969483970Z" level=info msg="StopPodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" returns successfully"
Jan 13 21:28:36.970101 containerd[1472]: time="2025-01-13T21:28:36.970060774Z" level=info msg="RemovePodSandbox for \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\""
Jan 13 21:28:36.970355 containerd[1472]: time="2025-01-13T21:28:36.970318519Z" level=info msg="Forcibly stopping sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\""
Jan 13 21:28:36.971320 containerd[1472]: time="2025-01-13T21:28:36.970558501Z" level=info msg="TearDown network for sandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" successfully"
Jan 13 21:28:36.974219 containerd[1472]: time="2025-01-13T21:28:36.974114015Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.974219 containerd[1472]: time="2025-01-13T21:28:36.974216928Z" level=info msg="RemovePodSandbox \"a9d155b2da8c600eb046976b4e268ccce337e7dc2125c534089750e4944ad5ae\" returns successfully"
Jan 13 21:28:36.975530 containerd[1472]: time="2025-01-13T21:28:36.975014928Z" level=info msg="StopPodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\""
Jan 13 21:28:36.975530 containerd[1472]: time="2025-01-13T21:28:36.975209715Z" level=info msg="TearDown network for sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" successfully"
Jan 13 21:28:36.975530 containerd[1472]: time="2025-01-13T21:28:36.975238990Z" level=info msg="StopPodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" returns successfully"
Jan 13 21:28:36.976274 containerd[1472]: time="2025-01-13T21:28:36.976154451Z" level=info msg="RemovePodSandbox for \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\""
Jan 13 21:28:36.976274 containerd[1472]: time="2025-01-13T21:28:36.976201369Z" level=info msg="Forcibly stopping sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\""
Jan 13 21:28:36.976433 containerd[1472]: time="2025-01-13T21:28:36.976316355Z" level=info msg="TearDown network for sandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" successfully"
Jan 13 21:28:36.980471 containerd[1472]: time="2025-01-13T21:28:36.980411044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.980560 containerd[1472]: time="2025-01-13T21:28:36.980485233Z" level=info msg="RemovePodSandbox \"6bdeec65740d17e38f10f4a7013ccbcce164732a0a31f172e29e32c1e8ac61f8\" returns successfully"
Jan 13 21:28:36.981163 containerd[1472]: time="2025-01-13T21:28:36.980855800Z" level=info msg="StopPodSandbox for \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\""
Jan 13 21:28:36.981163 containerd[1472]: time="2025-01-13T21:28:36.980934267Z" level=info msg="TearDown network for sandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\" successfully"
Jan 13 21:28:36.981163 containerd[1472]: time="2025-01-13T21:28:36.980947021Z" level=info msg="StopPodSandbox for \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\" returns successfully"
Jan 13 21:28:36.981459 containerd[1472]: time="2025-01-13T21:28:36.981442011Z" level=info msg="RemovePodSandbox for \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\""
Jan 13 21:28:36.981731 containerd[1472]: time="2025-01-13T21:28:36.981610909Z" level=info msg="Forcibly stopping sandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\""
Jan 13 21:28:36.981731 containerd[1472]: time="2025-01-13T21:28:36.981675580Z" level=info msg="TearDown network for sandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\" successfully"
Jan 13 21:28:36.984886 containerd[1472]: time="2025-01-13T21:28:36.984581495Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.984886 containerd[1472]: time="2025-01-13T21:28:36.984633532Z" level=info msg="RemovePodSandbox \"25b24b4fa756b05026ba160c0d9484fe8be6eee7c6694e19cbb8a815da267260\" returns successfully"
Jan 13 21:28:36.985215 containerd[1472]: time="2025-01-13T21:28:36.985088728Z" level=info msg="StopPodSandbox for \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\""
Jan 13 21:28:36.985429 containerd[1472]: time="2025-01-13T21:28:36.985295957Z" level=info msg="TearDown network for sandbox \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\" successfully"
Jan 13 21:28:36.985429 containerd[1472]: time="2025-01-13T21:28:36.985311527Z" level=info msg="StopPodSandbox for \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\" returns successfully"
Jan 13 21:28:36.985788 containerd[1472]: time="2025-01-13T21:28:36.985719744Z" level=info msg="RemovePodSandbox for \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\""
Jan 13 21:28:36.985788 containerd[1472]: time="2025-01-13T21:28:36.985764608Z" level=info msg="Forcibly stopping sandbox \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\""
Jan 13 21:28:36.985928 containerd[1472]: time="2025-01-13T21:28:36.985866760Z" level=info msg="TearDown network for sandbox \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\" successfully"
Jan 13 21:28:36.989832 containerd[1472]: time="2025-01-13T21:28:36.989778825Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.989899 containerd[1472]: time="2025-01-13T21:28:36.989863926Z" level=info msg="RemovePodSandbox \"f42b1ecf4c7b18a83b0f63a1330fcd8672c62501e73c6ed75547501b4261c0ca\" returns successfully"
Jan 13 21:28:36.990469 containerd[1472]: time="2025-01-13T21:28:36.990416785Z" level=info msg="StopPodSandbox for \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\""
Jan 13 21:28:36.990622 containerd[1472]: time="2025-01-13T21:28:36.990578459Z" level=info msg="TearDown network for sandbox \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\" successfully"
Jan 13 21:28:36.990686 containerd[1472]: time="2025-01-13T21:28:36.990614707Z" level=info msg="StopPodSandbox for \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\" returns successfully"
Jan 13 21:28:36.991730 containerd[1472]: time="2025-01-13T21:28:36.990943435Z" level=info msg="RemovePodSandbox for \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\""
Jan 13 21:28:36.991730 containerd[1472]: time="2025-01-13T21:28:36.990967330Z" level=info msg="Forcibly stopping sandbox \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\""
Jan 13 21:28:36.991730 containerd[1472]: time="2025-01-13T21:28:36.991029897Z" level=info msg="TearDown network for sandbox \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\" successfully"
Jan 13 21:28:36.993687 containerd[1472]: time="2025-01-13T21:28:36.993659391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.993790 containerd[1472]: time="2025-01-13T21:28:36.993773356Z" level=info msg="RemovePodSandbox \"471c4c0ae1291cbc6e1983f61e89094560e956929cea7d874b12026a47af5eec\" returns successfully"
Jan 13 21:28:36.994188 containerd[1472]: time="2025-01-13T21:28:36.994168538Z" level=info msg="StopPodSandbox for \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\""
Jan 13 21:28:36.994436 containerd[1472]: time="2025-01-13T21:28:36.994418638Z" level=info msg="TearDown network for sandbox \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\" successfully"
Jan 13 21:28:36.994504 containerd[1472]: time="2025-01-13T21:28:36.994489662Z" level=info msg="StopPodSandbox for \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\" returns successfully"
Jan 13 21:28:36.994943 containerd[1472]: time="2025-01-13T21:28:36.994894043Z" level=info msg="RemovePodSandbox for \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\""
Jan 13 21:28:36.994990 containerd[1472]: time="2025-01-13T21:28:36.994948885Z" level=info msg="Forcibly stopping sandbox \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\""
Jan 13 21:28:36.995184 containerd[1472]: time="2025-01-13T21:28:36.995069191Z" level=info msg="TearDown network for sandbox \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\" successfully"
Jan 13 21:28:36.999057 containerd[1472]: time="2025-01-13T21:28:36.998996104Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:36.999138 containerd[1472]: time="2025-01-13T21:28:36.999070124Z" level=info msg="RemovePodSandbox \"5cc7099765758f9cfb415a757abb45d35195b82cb1972ead1d54758b048b182e\" returns successfully"
Jan 13 21:28:36.999838 containerd[1472]: time="2025-01-13T21:28:36.999692795Z" level=info msg="StopPodSandbox for \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\""
Jan 13 21:28:36.999838 containerd[1472]: time="2025-01-13T21:28:36.999771152Z" level=info msg="TearDown network for sandbox \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\" successfully"
Jan 13 21:28:36.999838 containerd[1472]: time="2025-01-13T21:28:36.999783395Z" level=info msg="StopPodSandbox for \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\" returns successfully"
Jan 13 21:28:37.000390 containerd[1472]: time="2025-01-13T21:28:37.000107413Z" level=info msg="RemovePodSandbox for \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\""
Jan 13 21:28:37.000390 containerd[1472]: time="2025-01-13T21:28:37.000273546Z" level=info msg="Forcibly stopping sandbox \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\""
Jan 13 21:28:37.000390 containerd[1472]: time="2025-01-13T21:28:37.000332617Z" level=info msg="TearDown network for sandbox \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\" successfully"
Jan 13 21:28:37.003470 containerd[1472]: time="2025-01-13T21:28:37.003363966Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:28:37.003470 containerd[1472]: time="2025-01-13T21:28:37.003400034Z" level=info msg="RemovePodSandbox \"8b6a798d74a5dcfbfb13671be1a6da29a9a66c103a927be354edd2ffe217e5e3\" returns successfully"
Jan 13 21:28:37.846630 kubelet[1863]: E0113 21:28:37.846546 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:38.847276 kubelet[1863]: E0113 21:28:38.847177 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:39.848524 kubelet[1863]: E0113 21:28:39.848421 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:40.849659 kubelet[1863]: E0113 21:28:40.849564 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:41.850433 kubelet[1863]: E0113 21:28:41.850357 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:42.851576 kubelet[1863]: E0113 21:28:42.851458 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:43.852340 kubelet[1863]: E0113 21:28:43.852265 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:44.853369 kubelet[1863]: E0113 21:28:44.853260 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:45.854344 kubelet[1863]: E0113 21:28:45.854064 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:46.854886 kubelet[1863]: E0113 21:28:46.854773 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:47.855463 kubelet[1863]: E0113 21:28:47.855378 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:48.856043 kubelet[1863]: E0113 21:28:48.855987 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:49.857299 kubelet[1863]: E0113 21:28:49.857183 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:50.781995 kubelet[1863]: I0113 21:28:50.781933 1863 topology_manager.go:215] "Topology Admit Handler" podUID="50d488a7-c181-45c4-a1cb-169eeb4c31e6" podNamespace="default" podName="test-pod-1"
Jan 13 21:28:50.796414 systemd[1]: Created slice kubepods-besteffort-pod50d488a7_c181_45c4_a1cb_169eeb4c31e6.slice - libcontainer container kubepods-besteffort-pod50d488a7_c181_45c4_a1cb_169eeb4c31e6.slice.
Jan 13 21:28:50.857798 kubelet[1863]: E0113 21:28:50.857714 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:50.934012 kubelet[1863]: I0113 21:28:50.933839 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7abb3366-6521-416c-8d7b-6c0cca53f440\" (UniqueName: \"kubernetes.io/nfs/50d488a7-c181-45c4-a1cb-169eeb4c31e6-pvc-7abb3366-6521-416c-8d7b-6c0cca53f440\") pod \"test-pod-1\" (UID: \"50d488a7-c181-45c4-a1cb-169eeb4c31e6\") " pod="default/test-pod-1"
Jan 13 21:28:50.934323 kubelet[1863]: I0113 21:28:50.934090 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmb6x\" (UniqueName: \"kubernetes.io/projected/50d488a7-c181-45c4-a1cb-169eeb4c31e6-kube-api-access-tmb6x\") pod \"test-pod-1\" (UID: \"50d488a7-c181-45c4-a1cb-169eeb4c31e6\") " pod="default/test-pod-1"
Jan 13 21:28:51.096239 kernel: FS-Cache: Loaded
Jan 13 21:28:51.193176 kernel: RPC: Registered named UNIX socket transport module.
Jan 13 21:28:51.193326 kernel: RPC: Registered udp transport module.
Jan 13 21:28:51.193374 kernel: RPC: Registered tcp transport module.
Jan 13 21:28:51.194764 kernel: RPC: Registered tcp-with-tls transport module.
Jan 13 21:28:51.194904 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 13 21:28:51.549306 kernel: NFS: Registering the id_resolver key type
Jan 13 21:28:51.549413 kernel: Key type id_resolver registered
Jan 13 21:28:51.550855 kernel: Key type id_legacy registered
Jan 13 21:28:51.614387 nfsidmap[3740]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Jan 13 21:28:51.622068 nfsidmap[3741]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Jan 13 21:28:51.702457 containerd[1472]: time="2025-01-13T21:28:51.702250685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:50d488a7-c181-45c4-a1cb-169eeb4c31e6,Namespace:default,Attempt:0,}"
Jan 13 21:28:51.858930 kubelet[1863]: E0113 21:28:51.858751 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:51.919952 systemd-networkd[1378]: cali5ec59c6bf6e: Link UP
Jan 13 21:28:51.920461 systemd-networkd[1378]: cali5ec59c6bf6e: Gained carrier
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.800 [INFO][3743] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.197-k8s-test--pod--1-eth0 default 50d488a7-c181-45c4-a1cb-169eeb4c31e6 1436 0 2025-01-13 21:28:21 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.197 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.197-k8s-test--pod--1-"
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.800 [INFO][3743] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.197-k8s-test--pod--1-eth0"
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.853 [INFO][3753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" HandleID="k8s-pod-network.a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" Workload="172.24.4.197-k8s-test--pod--1-eth0"
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.870 [INFO][3753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" HandleID="k8s-pod-network.a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" Workload="172.24.4.197-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000383a60), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.197", "pod":"test-pod-1", "timestamp":"2025-01-13 21:28:51.852995146 +0000 UTC"}, Hostname:"172.24.4.197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.870 [INFO][3753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.870 [INFO][3753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.870 [INFO][3753] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.197'
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.873 [INFO][3753] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" host="172.24.4.197"
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.880 [INFO][3753] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.197"
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.887 [INFO][3753] ipam/ipam.go 489: Trying affinity for 192.168.83.64/26 host="172.24.4.197"
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.889 [INFO][3753] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.64/26 host="172.24.4.197"
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.892 [INFO][3753] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="172.24.4.197"
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.892 [INFO][3753] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" host="172.24.4.197"
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.894 [INFO][3753] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.901 [INFO][3753] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" host="172.24.4.197"
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.911 [INFO][3753] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.68/26] block=192.168.83.64/26 handle="k8s-pod-network.a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" host="172.24.4.197"
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.911 [INFO][3753] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.68/26] handle="k8s-pod-network.a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" host="172.24.4.197"
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.911 [INFO][3753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.911 [INFO][3753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.68/26] IPv6=[] ContainerID="a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" HandleID="k8s-pod-network.a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" Workload="172.24.4.197-k8s-test--pod--1-eth0"
Jan 13 21:28:51.936062 containerd[1472]: 2025-01-13 21:28:51.913 [INFO][3743] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.197-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.197-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"50d488a7-c181-45c4-a1cb-169eeb4c31e6", ResourceVersion:"1436", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 28, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.197", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.83.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:28:51.938659 containerd[1472]: 2025-01-13 21:28:51.913 [INFO][3743] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.68/32] ContainerID="a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.197-k8s-test--pod--1-eth0"
Jan 13 21:28:51.938659 containerd[1472]: 2025-01-13 21:28:51.914 [INFO][3743] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.197-k8s-test--pod--1-eth0"
Jan 13 21:28:51.938659 containerd[1472]: 2025-01-13 21:28:51.922 [INFO][3743] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.197-k8s-test--pod--1-eth0"
Jan 13 21:28:51.938659 containerd[1472]: 2025-01-13 21:28:51.922 [INFO][3743] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.197-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.197-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"50d488a7-c181-45c4-a1cb-169eeb4c31e6", ResourceVersion:"1436", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 28, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.197", ContainerID:"a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.83.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"9a:7d:85:5e:09:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:28:51.938659 containerd[1472]: 2025-01-13 21:28:51.933 [INFO][3743] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.197-k8s-test--pod--1-eth0"
Jan 13 21:28:51.978646 containerd[1472]: time="2025-01-13T21:28:51.977850538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:28:51.978646 containerd[1472]: time="2025-01-13T21:28:51.977976304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:28:51.978646 containerd[1472]: time="2025-01-13T21:28:51.978044211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:28:51.980110 containerd[1472]: time="2025-01-13T21:28:51.979767787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:28:52.012263 systemd[1]: Started cri-containerd-a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068.scope - libcontainer container a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068.
Jan 13 21:28:52.056588 containerd[1472]: time="2025-01-13T21:28:52.056535782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:50d488a7-c181-45c4-a1cb-169eeb4c31e6,Namespace:default,Attempt:0,} returns sandbox id \"a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068\""
Jan 13 21:28:52.058895 containerd[1472]: time="2025-01-13T21:28:52.058648367Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 21:28:52.684536 containerd[1472]: time="2025-01-13T21:28:52.684349955Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:28:52.686092 containerd[1472]: time="2025-01-13T21:28:52.685974054Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 21:28:52.694494 containerd[1472]: time="2025-01-13T21:28:52.694429716Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 635.737125ms"
Jan 13 21:28:52.694640 containerd[1472]: time="2025-01-13T21:28:52.694500399Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 21:28:52.699543 containerd[1472]: time="2025-01-13T21:28:52.699460028Z" level=info msg="CreateContainer within sandbox \"a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 21:28:52.731971 containerd[1472]: time="2025-01-13T21:28:52.731903439Z" level=info msg="CreateContainer within sandbox \"a73185476f06587eec81d36a18aa6762a13aeb807999d64b6696de9a07e43068\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3447bd2e8fe002950f767a2023ae45ec275cf4fcff9b7a82728d4ee981d70175\""
Jan 13 21:28:52.733977 containerd[1472]: time="2025-01-13T21:28:52.733587951Z" level=info msg="StartContainer for \"3447bd2e8fe002950f767a2023ae45ec275cf4fcff9b7a82728d4ee981d70175\""
Jan 13 21:28:52.785437 systemd[1]: Started cri-containerd-3447bd2e8fe002950f767a2023ae45ec275cf4fcff9b7a82728d4ee981d70175.scope - libcontainer container 3447bd2e8fe002950f767a2023ae45ec275cf4fcff9b7a82728d4ee981d70175.
Jan 13 21:28:52.821538 containerd[1472]: time="2025-01-13T21:28:52.821451752Z" level=info msg="StartContainer for \"3447bd2e8fe002950f767a2023ae45ec275cf4fcff9b7a82728d4ee981d70175\" returns successfully"
Jan 13 21:28:52.859044 kubelet[1863]: E0113 21:28:52.858996 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:53.430177 kubelet[1863]: I0113 21:28:53.429975 1863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=31.792006006 podStartE2EDuration="32.429941448s" podCreationTimestamp="2025-01-13 21:28:21 +0000 UTC" firstStartedPulling="2025-01-13 21:28:52.058165111 +0000 UTC m=+75.769143418" lastFinishedPulling="2025-01-13 21:28:52.696100503 +0000 UTC m=+76.407078860" observedRunningTime="2025-01-13 21:28:53.429723859 +0000 UTC m=+77.140702206" watchObservedRunningTime="2025-01-13 21:28:53.429941448 +0000 UTC m=+77.140919795"
Jan 13 21:28:53.636587 systemd-networkd[1378]: cali5ec59c6bf6e: Gained IPv6LL
Jan 13 21:28:53.859968 kubelet[1863]: E0113 21:28:53.859704 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:54.860381 kubelet[1863]: E0113 21:28:54.860290 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:55.860884 kubelet[1863]: E0113 21:28:55.860799 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:56.801616 kubelet[1863]: E0113 21:28:56.801553 1863 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:56.861170 kubelet[1863]: E0113 21:28:56.861086 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:57.862394 kubelet[1863]: E0113 21:28:57.862273 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:58.863256 kubelet[1863]: E0113 21:28:58.863189 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:28:59.864167 kubelet[1863]: E0113 21:28:59.864055 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:29:00.865438 kubelet[1863]: E0113 21:29:00.865329 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:29:01.866016 kubelet[1863]: E0113 21:29:01.865873 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:29:02.866793 kubelet[1863]: E0113 21:29:02.866700 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:29:03.868040 kubelet[1863]: E0113 21:29:03.867897 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:29:04.869060 kubelet[1863]: E0113 21:29:04.868966 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:29:05.869958 kubelet[1863]: E0113 21:29:05.869874 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"