Jan 29 11:24:48.087462 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 29 11:24:48.087516 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 29 11:24:48.087528 kernel: BIOS-provided physical RAM map:
Jan 29 11:24:48.087537 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 11:24:48.087545 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 11:24:48.087557 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 11:24:48.087567 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 29 11:24:48.087576 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 29 11:24:48.087584 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:24:48.087593 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 11:24:48.087602 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 29 11:24:48.087610 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 11:24:48.087619 kernel: NX (Execute Disable) protection: active
Jan 29 11:24:48.087628 kernel: APIC: Static calls initialized
Jan 29 11:24:48.087640 kernel: SMBIOS 3.0.0 present.
Jan 29 11:24:48.087650 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 29 11:24:48.087658 kernel: Hypervisor detected: KVM
Jan 29 11:24:48.087667 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:24:48.087676 kernel: kvm-clock: using sched offset of 4053542604 cycles
Jan 29 11:24:48.087689 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:24:48.087699 kernel: tsc: Detected 1996.249 MHz processor
Jan 29 11:24:48.087709 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:24:48.087719 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:24:48.087728 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 29 11:24:48.087738 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 11:24:48.087747 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:24:48.087757 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 29 11:24:48.087766 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:24:48.087779 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 29 11:24:48.087788 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:24:48.087798 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:24:48.087807 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:24:48.087816 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 29 11:24:48.087826 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:24:48.087835 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:24:48.087844 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 29 11:24:48.087853 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 29 11:24:48.087866 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 29 11:24:48.087875 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 29 11:24:48.087885 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 29 11:24:48.087898 kernel: No NUMA configuration found
Jan 29 11:24:48.087907 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 29 11:24:48.087919 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jan 29 11:24:48.087930 kernel: Zone ranges:
Jan 29 11:24:48.087939 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:24:48.087948 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 29 11:24:48.087957 kernel:   Normal   [mem 0x0000000100000000-0x000000013fffffff]
Jan 29 11:24:48.087966 kernel: Movable zone start for each node
Jan 29 11:24:48.087975 kernel: Early memory node ranges
Jan 29 11:24:48.087984 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 11:24:48.087993 kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 29 11:24:48.088004 kernel:   node   0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 29 11:24:48.088013 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 29 11:24:48.088022 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:24:48.088031 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 11:24:48.088041 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 29 11:24:48.088050 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:24:48.088059 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:24:48.088068 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:24:48.088077 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:24:48.088086 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:24:48.088097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:24:48.088106 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:24:48.088116 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:24:48.088125 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:24:48.088134 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 11:24:48.088143 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:24:48.088152 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 29 11:24:48.088161 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:24:48.088172 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:24:48.088182 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 11:24:48.088191 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 11:24:48.088200 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 11:24:48.088209 kernel: pcpu-alloc: [0] 0 1
Jan 29 11:24:48.088218 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 29 11:24:48.088228 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 29 11:24:48.088238 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:24:48.088249 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:24:48.088258 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:24:48.088267 kernel: Fallback order for Node 0: 0
Jan 29 11:24:48.088276 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 29 11:24:48.088285 kernel: Policy zone: Normal
Jan 29 11:24:48.088294 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:24:48.088303 kernel: software IO TLB: area num 2.
Jan 29 11:24:48.088313 kernel: Memory: 3964156K/4193772K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 229356K reserved, 0K cma-reserved)
Jan 29 11:24:48.088322 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 11:24:48.089373 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 11:24:48.089384 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:24:48.089393 kernel: Dynamic Preempt: voluntary
Jan 29 11:24:48.089402 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:24:48.089413 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:24:48.089422 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 11:24:48.089431 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:24:48.089440 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:24:48.089449 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:24:48.089460 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:24:48.089470 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 11:24:48.089478 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 11:24:48.089488 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:24:48.089497 kernel: Console: colour VGA+ 80x25
Jan 29 11:24:48.089505 kernel: printk: console [tty0] enabled
Jan 29 11:24:48.089514 kernel: printk: console [ttyS0] enabled
Jan 29 11:24:48.089523 kernel: ACPI: Core revision 20230628
Jan 29 11:24:48.089532 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:24:48.089541 kernel: x2apic enabled
Jan 29 11:24:48.089552 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:24:48.089561 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 11:24:48.089570 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 11:24:48.089579 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 29 11:24:48.089588 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 11:24:48.089597 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 11:24:48.089607 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:24:48.089616 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:24:48.089625 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:24:48.089636 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:24:48.089645 kernel: Speculative Store Bypass: Vulnerable
Jan 29 11:24:48.089654 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 29 11:24:48.089663 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:24:48.089678 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:24:48.089690 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:24:48.089700 kernel: landlock: Up and running.
Jan 29 11:24:48.089709 kernel: SELinux: Initializing.
Jan 29 11:24:48.089718 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:24:48.089728 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:24:48.089737 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 29 11:24:48.089750 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:24:48.089759 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:24:48.089769 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:24:48.089779 kernel: Performance Events: AMD PMU driver.
Jan 29 11:24:48.089788 kernel: ... version:                0
Jan 29 11:24:48.089800 kernel: ... bit width:              48
Jan 29 11:24:48.089809 kernel: ... generic registers:      4
Jan 29 11:24:48.089819 kernel: ... value mask:             0000ffffffffffff
Jan 29 11:24:48.089828 kernel: ... max period:             00007fffffffffff
Jan 29 11:24:48.089837 kernel: ... fixed-purpose events:   0
Jan 29 11:24:48.089846 kernel: ... event mask:             000000000000000f
Jan 29 11:24:48.089856 kernel: signal: max sigframe size: 1440
Jan 29 11:24:48.089865 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:24:48.089875 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:24:48.089887 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:24:48.089896 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:24:48.089906 kernel: .... node #0, CPUs: #1
Jan 29 11:24:48.089915 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 11:24:48.089924 kernel: smpboot: Max logical packages: 2
Jan 29 11:24:48.089934 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 29 11:24:48.089943 kernel: devtmpfs: initialized
Jan 29 11:24:48.089953 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:24:48.089962 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:24:48.089972 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 11:24:48.089983 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:24:48.089992 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:24:48.090002 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:24:48.090011 kernel: audit: type=2000 audit(1738149887.743:1): state=initialized audit_enabled=0 res=1
Jan 29 11:24:48.090021 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:24:48.090030 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:24:48.090039 kernel: cpuidle: using governor menu
Jan 29 11:24:48.090049 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:24:48.090058 kernel: dca service started, version 1.12.1
Jan 29 11:24:48.090069 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:24:48.090079 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:24:48.090089 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:24:48.090098 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:24:48.090107 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:24:48.090117 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:24:48.090126 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:24:48.090136 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:24:48.090145 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:24:48.090156 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:24:48.090166 kernel: ACPI: Interpreter enabled
Jan 29 11:24:48.090175 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 11:24:48.090184 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:24:48.090194 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:24:48.090203 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:24:48.090212 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 29 11:24:48.090222 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:24:48.094048 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:24:48.094180 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 29 11:24:48.094286 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 29 11:24:48.094302 kernel: acpiphp: Slot [3] registered
Jan 29 11:24:48.094312 kernel: acpiphp: Slot [4] registered
Jan 29 11:24:48.094322 kernel: acpiphp: Slot [5] registered
Jan 29 11:24:48.094350 kernel: acpiphp: Slot [6] registered
Jan 29 11:24:48.094360 kernel: acpiphp: Slot [7] registered
Jan 29 11:24:48.094373 kernel: acpiphp: Slot [8] registered
Jan 29 11:24:48.094383 kernel: acpiphp: Slot [9] registered
Jan 29 11:24:48.094392 kernel: acpiphp: Slot [10] registered
Jan 29 11:24:48.094401 kernel: acpiphp: Slot [11] registered
Jan 29 11:24:48.094410 kernel: acpiphp: Slot [12] registered
Jan 29 11:24:48.094420 kernel: acpiphp: Slot [13] registered
Jan 29 11:24:48.094429 kernel: acpiphp: Slot [14] registered
Jan 29 11:24:48.094438 kernel: acpiphp: Slot [15] registered
Jan 29 11:24:48.094448 kernel: acpiphp: Slot [16] registered
Jan 29 11:24:48.094459 kernel: acpiphp: Slot [17] registered
Jan 29 11:24:48.094468 kernel: acpiphp: Slot [18] registered
Jan 29 11:24:48.094477 kernel: acpiphp: Slot [19] registered
Jan 29 11:24:48.094487 kernel: acpiphp: Slot [20] registered
Jan 29 11:24:48.094496 kernel: acpiphp: Slot [21] registered
Jan 29 11:24:48.094505 kernel: acpiphp: Slot [22] registered
Jan 29 11:24:48.094515 kernel: acpiphp: Slot [23] registered
Jan 29 11:24:48.094524 kernel: acpiphp: Slot [24] registered
Jan 29 11:24:48.094533 kernel: acpiphp: Slot [25] registered
Jan 29 11:24:48.094542 kernel: acpiphp: Slot [26] registered
Jan 29 11:24:48.094554 kernel: acpiphp: Slot [27] registered
Jan 29 11:24:48.094563 kernel: acpiphp: Slot [28] registered
Jan 29 11:24:48.094572 kernel: acpiphp: Slot [29] registered
Jan 29 11:24:48.094582 kernel: acpiphp: Slot [30] registered
Jan 29 11:24:48.094591 kernel: acpiphp: Slot [31] registered
Jan 29 11:24:48.094600 kernel: PCI host bridge to bus 0000:00
Jan 29 11:24:48.094702 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:24:48.094787 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:24:48.094873 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:24:48.094954 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 11:24:48.095036 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 29 11:24:48.095118 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:24:48.095229 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 29 11:24:48.095361 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 29 11:24:48.095477 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 29 11:24:48.095577 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 29 11:24:48.095674 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 29 11:24:48.095771 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 29 11:24:48.095898 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 29 11:24:48.095996 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 29 11:24:48.096114 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 29 11:24:48.096220 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 29 11:24:48.096317 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 29 11:24:48.096506 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 29 11:24:48.096605 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 29 11:24:48.096701 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 29 11:24:48.096797 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 29 11:24:48.096892 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 29 11:24:48.096990 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:24:48.097088 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:24:48.097180 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 29 11:24:48.097270 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 29 11:24:48.097379 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 29 11:24:48.097471 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 29 11:24:48.097569 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:24:48.097666 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 11:24:48.100398 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 29 11:24:48.100531 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 29 11:24:48.100637 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 29 11:24:48.100741 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 29 11:24:48.100840 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 29 11:24:48.100948 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:24:48.101060 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 29 11:24:48.101226 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 29 11:24:48.102343 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 29 11:24:48.102365 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:24:48.102377 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:24:48.102387 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:24:48.102397 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:24:48.102407 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 29 11:24:48.102423 kernel: iommu: Default domain type: Translated
Jan 29 11:24:48.102433 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:24:48.102443 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:24:48.102453 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:24:48.102464 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 11:24:48.102474 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 29 11:24:48.102581 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 29 11:24:48.102678 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 29 11:24:48.102780 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:24:48.102795 kernel: vgaarb: loaded
Jan 29 11:24:48.102805 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:24:48.102815 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:24:48.102826 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:24:48.102836 kernel: pnp: PnP ACPI init
Jan 29 11:24:48.102939 kernel: pnp 00:03: [dma 2]
Jan 29 11:24:48.102956 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 11:24:48.102967 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:24:48.102983 kernel: NET: Registered PF_INET protocol family
Jan 29 11:24:48.102993 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:24:48.103003 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:24:48.103013 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:24:48.103023 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:24:48.103034 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:24:48.103044 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:24:48.103054 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:24:48.103068 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:24:48.103078 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:24:48.103088 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:24:48.103175 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:24:48.103260 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:24:48.105380 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:24:48.105466 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 29 11:24:48.105544 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 29 11:24:48.105642 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 29 11:24:48.105741 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 29 11:24:48.105756 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:24:48.105765 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 29 11:24:48.105775 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 29 11:24:48.105785 kernel: Initialise system trusted keyrings
Jan 29 11:24:48.105795 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:24:48.105804 kernel: Key type asymmetric registered
Jan 29 11:24:48.105813 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:24:48.105827 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:24:48.105837 kernel: io scheduler mq-deadline registered
Jan 29 11:24:48.105846 kernel: io scheduler kyber registered
Jan 29 11:24:48.105855 kernel: io scheduler bfq registered
Jan 29 11:24:48.105865 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:24:48.105875 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 29 11:24:48.105889 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 29 11:24:48.105899 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 29 11:24:48.105908 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 29 11:24:48.105921 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:24:48.105930 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:24:48.105940 kernel: random: crng init done
Jan 29 11:24:48.105949 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:24:48.105959 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:24:48.105969 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:24:48.106061 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 11:24:48.106077 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 11:24:48.106156 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 11:24:48.106244 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:24:47 UTC (1738149887)
Jan 29 11:24:48.106357 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 29 11:24:48.106373 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 11:24:48.106382 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:24:48.106392 kernel: Segment Routing with IPv6
Jan 29 11:24:48.106401 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:24:48.106411 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:24:48.106420 kernel: Key type dns_resolver registered
Jan 29 11:24:48.106433 kernel: IPI shorthand broadcast: enabled
Jan 29 11:24:48.106443 kernel: sched_clock: Marking stable (1020007731, 172243580)->(1233638669, -41387358)
Jan 29 11:24:48.106453 kernel: registered taskstats version 1
Jan 29 11:24:48.106462 kernel: Loading compiled-in X.509 certificates
Jan 29 11:24:48.106472 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4'
Jan 29 11:24:48.106481 kernel: Key type .fscrypt registered
Jan 29 11:24:48.106490 kernel: Key type fscrypt-provisioning registered
Jan 29 11:24:48.106500 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:24:48.106509 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:24:48.106521 kernel: ima: No architecture policies found
Jan 29 11:24:48.106530 kernel: clk: Disabling unused clocks
Jan 29 11:24:48.106540 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 29 11:24:48.106549 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 11:24:48.106559 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 29 11:24:48.106568 kernel: Run /init as init process
Jan 29 11:24:48.106577 kernel:   with arguments:
Jan 29 11:24:48.106586 kernel:     /init
Jan 29 11:24:48.106596 kernel:   with environment:
Jan 29 11:24:48.106607 kernel:     HOME=/
Jan 29 11:24:48.106616 kernel:     TERM=linux
Jan 29 11:24:48.106625 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:24:48.106638 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:24:48.106650 systemd[1]: Detected virtualization kvm.
Jan 29 11:24:48.106661 systemd[1]: Detected architecture x86-64.
Jan 29 11:24:48.106672 systemd[1]: Running in initrd.
Jan 29 11:24:48.106684 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:24:48.106694 systemd[1]: Hostname set to .
Jan 29 11:24:48.106705 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:24:48.106721 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:24:48.106754 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:24:48.106792 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:24:48.106832 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:24:48.106909 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:24:48.106955 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:24:48.106991 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:24:48.107036 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:24:48.107071 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:24:48.107110 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:24:48.107153 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:24:48.107195 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:24:48.107230 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:24:48.107270 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:24:48.107305 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:24:48.108370 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:24:48.108383 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:24:48.108394 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:24:48.108409 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:24:48.108420 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:24:48.108430 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:24:48.108441 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:24:48.108451 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:24:48.108462 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:24:48.108472 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:24:48.108482 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:24:48.108493 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:24:48.108505 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:24:48.108516 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:24:48.108527 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:24:48.108537 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:24:48.108547 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:24:48.108558 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:24:48.108597 systemd-journald[185]: Collecting audit messages is disabled.
Jan 29 11:24:48.108624 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:24:48.108638 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:24:48.108648 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:24:48.108659 systemd-journald[185]: Journal started
Jan 29 11:24:48.108686 systemd-journald[185]: Runtime Journal (/run/log/journal/d1bcaa41435a41279fe1e44c4a1b39cd) is 8.0M, max 78.3M, 70.3M free.
Jan 29 11:24:48.068444 systemd-modules-load[186]: Inserted module 'overlay'
Jan 29 11:24:48.149692 kernel: Bridge firewalling registered
Jan 29 11:24:48.149716 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:24:48.110057 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jan 29 11:24:48.150472 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:24:48.151303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:24:48.161462 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:24:48.164450 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:24:48.165567 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:24:48.170496 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:24:48.181528 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:24:48.188558 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:24:48.189857 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:24:48.191314 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:24:48.202760 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:24:48.214244 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:24:48.219355 dracut-cmdline[217]: dracut-dracut-053
Jan 29 11:24:48.220608 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 29 11:24:48.250779 systemd-resolved[224]: Positive Trust Anchors:
Jan 29 11:24:48.251580 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:24:48.251626 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:24:48.255125 systemd-resolved[224]: Defaulting to hostname 'linux'.
Jan 29 11:24:48.256087 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:24:48.257004 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:24:48.300432 kernel: SCSI subsystem initialized
Jan 29 11:24:48.312390 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:24:48.324389 kernel: iscsi: registered transport (tcp)
Jan 29 11:24:48.348625 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:24:48.348714 kernel: QLogic iSCSI HBA Driver
Jan 29 11:24:48.411046 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:24:48.419609 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:24:48.456045 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:24:48.456178 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:24:48.456849 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:24:48.520633 kernel: raid6: sse2x4 gen() 5107 MB/s
Jan 29 11:24:48.538440 kernel: raid6: sse2x2 gen() 10930 MB/s
Jan 29 11:24:48.557122 kernel: raid6: sse2x1 gen() 9430 MB/s
Jan 29 11:24:48.557187 kernel: raid6: using algorithm sse2x2 gen() 10930 MB/s
Jan 29 11:24:48.575988 kernel: raid6: .... xor() 9415 MB/s, rmw enabled
Jan 29 11:24:48.576054 kernel: raid6: using ssse3x2 recovery algorithm
Jan 29 11:24:48.603394 kernel: xor: measuring software checksum speed
Jan 29 11:24:48.603484 kernel: prefetch64-sse : 15978 MB/sec
Jan 29 11:24:48.606027 kernel: generic_sse : 15597 MB/sec
Jan 29 11:24:48.606087 kernel: xor: using function: prefetch64-sse (15978 MB/sec)
Jan 29 11:24:48.778403 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:24:48.793098 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:24:48.801616 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:24:48.842631 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Jan 29 11:24:48.853293 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:24:48.864080 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:24:48.887214 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Jan 29 11:24:48.920786 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:24:48.932517 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:24:48.991425 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:24:48.997512 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:24:49.017986 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:24:49.019893 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:24:49.022036 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:24:49.023395 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:24:49.029517 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:24:49.051420 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:24:49.067492 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 29 11:24:49.119372 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 29 11:24:49.119518 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:24:49.119535 kernel: GPT:17805311 != 20971519
Jan 29 11:24:49.119548 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:24:49.119562 kernel: GPT:17805311 != 20971519
Jan 29 11:24:49.119574 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:24:49.119586 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:24:49.119599 kernel: libata version 3.00 loaded.
Jan 29 11:24:49.095437 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:24:49.095581 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:24:49.096301 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:24:49.096862 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:24:49.096982 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:24:49.097601 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:24:49.116656 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:24:49.127660 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 29 11:24:49.149987 kernel: scsi host0: ata_piix
Jan 29 11:24:49.150135 kernel: scsi host1: ata_piix
Jan 29 11:24:49.150249 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 29 11:24:49.150264 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 29 11:24:49.164356 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (448)
Jan 29 11:24:49.176365 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (460)
Jan 29 11:24:49.180048 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:24:49.203576 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:24:49.209579 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:24:49.210110 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:24:49.216151 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:24:49.225964 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:24:49.231505 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:24:49.234287 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:24:49.246719 disk-uuid[504]: Primary Header is updated.
Jan 29 11:24:49.246719 disk-uuid[504]: Secondary Entries is updated.
Jan 29 11:24:49.246719 disk-uuid[504]: Secondary Header is updated.
Jan 29 11:24:49.259360 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:24:49.263596 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:24:50.275425 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:24:50.278004 disk-uuid[508]: The operation has completed successfully.
Jan 29 11:24:50.362721 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:24:50.363011 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:24:50.399639 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:24:50.418232 sh[524]: Success
Jan 29 11:24:50.452404 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 29 11:24:50.537724 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:24:50.539952 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:24:50.547485 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:24:50.574534 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58
Jan 29 11:24:50.574602 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:24:50.579135 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:24:50.583837 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:24:50.589663 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:24:50.612989 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:24:50.615583 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:24:50.622684 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:24:50.633934 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:24:50.657781 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 29 11:24:50.657912 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:24:50.659518 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:24:50.666452 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:24:50.680168 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:24:50.682465 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 29 11:24:50.693598 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:24:50.700641 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:24:50.767859 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:24:50.774573 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:24:50.797878 systemd-networkd[706]: lo: Link UP
Jan 29 11:24:50.798723 systemd-networkd[706]: lo: Gained carrier
Jan 29 11:24:50.800555 systemd-networkd[706]: Enumeration completed
Jan 29 11:24:50.800689 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:24:50.801407 systemd[1]: Reached target network.target - Network.
Jan 29 11:24:50.804396 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:24:50.804400 systemd-networkd[706]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:24:50.806833 systemd-networkd[706]: eth0: Link UP
Jan 29 11:24:50.806837 systemd-networkd[706]: eth0: Gained carrier
Jan 29 11:24:50.806846 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:24:50.820404 systemd-networkd[706]: eth0: DHCPv4 address 172.24.4.109/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 29 11:24:50.865953 ignition[619]: Ignition 2.20.0
Jan 29 11:24:50.865967 ignition[619]: Stage: fetch-offline
Jan 29 11:24:50.867604 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:24:50.866015 ignition[619]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:24:50.866026 ignition[619]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 11:24:50.866144 ignition[619]: parsed url from cmdline: ""
Jan 29 11:24:50.866148 ignition[619]: no config URL provided
Jan 29 11:24:50.866154 ignition[619]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:24:50.866165 ignition[619]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:24:50.866171 ignition[619]: failed to fetch config: resource requires networking
Jan 29 11:24:50.866470 ignition[619]: Ignition finished successfully
Jan 29 11:24:50.875530 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 11:24:50.888692 ignition[716]: Ignition 2.20.0
Jan 29 11:24:50.888705 ignition[716]: Stage: fetch
Jan 29 11:24:50.888876 ignition[716]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:24:50.888889 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 11:24:50.888981 ignition[716]: parsed url from cmdline: ""
Jan 29 11:24:50.888984 ignition[716]: no config URL provided
Jan 29 11:24:50.888990 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:24:50.888998 ignition[716]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:24:50.889081 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 29 11:24:50.889289 ignition[716]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 29 11:24:50.889317 ignition[716]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 29 11:24:51.079507 ignition[716]: GET result: OK
Jan 29 11:24:51.079629 ignition[716]: parsing config with SHA512: 0262de89d60c2895b5476af18f46cc824085fa9e130a2b5a6bc77895d84cf002a62eb9eb17a46b6ead2bd6333b0c16b7f6651516772b4fc94a18727c9130eec5
Jan 29 11:24:51.086893 unknown[716]: fetched base config from "system"
Jan 29 11:24:51.086920 unknown[716]: fetched base config from "system"
Jan 29 11:24:51.087755 ignition[716]: fetch: fetch complete
Jan 29 11:24:51.086935 unknown[716]: fetched user config from "openstack"
Jan 29 11:24:51.087767 ignition[716]: fetch: fetch passed
Jan 29 11:24:51.092205 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 11:24:51.087859 ignition[716]: Ignition finished successfully
Jan 29 11:24:51.102963 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:24:51.139573 ignition[722]: Ignition 2.20.0
Jan 29 11:24:51.139592 ignition[722]: Stage: kargs
Jan 29 11:24:51.139993 ignition[722]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:24:51.140017 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 11:24:51.141760 ignition[722]: kargs: kargs passed
Jan 29 11:24:51.145525 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:24:51.141854 ignition[722]: Ignition finished successfully
Jan 29 11:24:51.162779 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:24:51.190898 ignition[728]: Ignition 2.20.0
Jan 29 11:24:51.190925 ignition[728]: Stage: disks
Jan 29 11:24:51.191374 ignition[728]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:24:51.191407 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 11:24:51.195566 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:24:51.193168 ignition[728]: disks: disks passed
Jan 29 11:24:51.198950 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:24:51.193261 ignition[728]: Ignition finished successfully
Jan 29 11:24:51.200647 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:24:51.203646 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:24:51.206044 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:24:51.208990 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:24:51.218606 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:24:51.256678 systemd-fsck[736]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 11:24:51.267324 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:24:51.273619 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:24:51.435388 kernel: EXT4-fs (vda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none.
Jan 29 11:24:51.435600 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:24:51.436712 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:24:51.444541 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:24:51.448594 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:24:51.451991 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:24:51.455028 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 29 11:24:51.456231 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:24:51.474853 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (744)
Jan 29 11:24:51.474904 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 29 11:24:51.474936 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:24:51.474966 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:24:51.475008 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:24:51.456262 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:24:51.459160 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:24:51.482829 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:24:51.494565 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:24:51.625203 initrd-setup-root[772]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:24:51.634855 initrd-setup-root[779]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:24:51.644753 initrd-setup-root[786]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:24:51.651541 initrd-setup-root[793]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:24:51.785691 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:24:51.795560 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:24:51.804833 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:24:51.824107 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:24:51.832096 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 29 11:24:51.864546 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:24:51.870117 ignition[862]: INFO : Ignition 2.20.0
Jan 29 11:24:51.870117 ignition[862]: INFO : Stage: mount
Jan 29 11:24:51.870117 ignition[862]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:24:51.870117 ignition[862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 11:24:51.873171 ignition[862]: INFO : mount: mount passed
Jan 29 11:24:51.873171 ignition[862]: INFO : Ignition finished successfully
Jan 29 11:24:51.872474 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:24:52.752115 systemd-networkd[706]: eth0: Gained IPv6LL
Jan 29 11:24:58.702905 coreos-metadata[746]: Jan 29 11:24:58.702 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 11:24:58.743458 coreos-metadata[746]: Jan 29 11:24:58.743 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 29 11:24:58.759788 coreos-metadata[746]: Jan 29 11:24:58.759 INFO Fetch successful
Jan 29 11:24:58.761362 coreos-metadata[746]: Jan 29 11:24:58.759 INFO wrote hostname ci-4186-1-0-f-29d8fa4ded.novalocal to /sysroot/etc/hostname
Jan 29 11:24:58.764916 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 29 11:24:58.765172 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 29 11:24:58.778562 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:24:58.812770 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:24:58.840413 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (877)
Jan 29 11:24:58.848161 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 29 11:24:58.848225 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:24:58.854751 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:24:58.864390 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:24:58.869916 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:24:58.915282 ignition[895]: INFO : Ignition 2.20.0
Jan 29 11:24:58.915282 ignition[895]: INFO : Stage: files
Jan 29 11:24:58.918612 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:24:58.918612 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 11:24:58.918612 ignition[895]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:24:58.924104 ignition[895]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:24:58.924104 ignition[895]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:24:58.924104 ignition[895]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:24:58.924104 ignition[895]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:24:58.924104 ignition[895]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:24:58.923362 unknown[895]: wrote ssh authorized keys file for user: core
Jan 29 11:24:58.935040 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:24:58.935040 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:24:58.935040 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:24:58.935040 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:24:58.935040 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:24:58.935040 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:24:58.935040 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:24:58.935040 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 29 11:24:59.495457 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 29 11:25:01.055146 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 11:25:01.059569 ignition[895]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:25:01.059569 ignition[895]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:25:01.059569 ignition[895]: INFO : files: files passed
Jan 29 11:25:01.059569 ignition[895]: INFO : Ignition finished successfully
Jan 29 11:25:01.057572 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:25:01.068968 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:25:01.072041 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:25:01.073130 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:25:01.073207 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:25:01.096790 initrd-setup-root-after-ignition[924]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:25:01.096790 initrd-setup-root-after-ignition[924]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:25:01.101947 initrd-setup-root-after-ignition[928]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:25:01.099928 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:25:01.103051 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:25:01.114529 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:25:01.144963 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:25:01.145191 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:25:01.147522 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:25:01.149074 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:25:01.151023 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:25:01.156673 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:25:01.172967 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:25:01.181645 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:25:01.202882 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:25:01.206417 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:25:01.208014 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:25:01.209859 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:25:01.210144 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:25:01.212890 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:25:01.214509 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:25:01.216251 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:25:01.218043 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:25:01.220125 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:25:01.224890 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:25:01.225991 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:25:01.227158 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:25:01.228239 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:25:01.229292 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:25:01.230155 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:25:01.230311 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:25:01.231561 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:25:01.232479 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:25:01.233511 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:25:01.233792 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:25:01.234677 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:25:01.234812 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:25:01.236151 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:25:01.236273 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:25:01.237614 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:25:01.237719 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:25:01.244507 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:25:01.245027 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:25:01.245151 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:25:01.248506 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:25:01.248999 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:25:01.249124 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:25:01.249875 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:25:01.250009 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:25:01.258749 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:25:01.259376 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:25:01.265961 ignition[949]: INFO : Ignition 2.20.0
Jan 29 11:25:01.265961 ignition[949]: INFO : Stage: umount
Jan 29 11:25:01.269535 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:25:01.269535 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 11:25:01.269535 ignition[949]: INFO : umount: umount passed
Jan 29 11:25:01.269535 ignition[949]: INFO : Ignition finished successfully
Jan 29 11:25:01.268249 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:25:01.268548 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:25:01.271678 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:25:01.271720 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:25:01.272201 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:25:01.272238 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:25:01.274409 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 11:25:01.274448 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 11:25:01.274936 systemd[1]: Stopped target network.target - Network. Jan 29 11:25:01.275391 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:25:01.275434 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:25:01.276006 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:25:01.278445 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:25:01.282498 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:25:01.283371 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:25:01.283918 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:25:01.284419 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:25:01.284452 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:25:01.284919 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:25:01.284950 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:25:01.285444 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:25:01.285481 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:25:01.285953 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:25:01.285989 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:25:01.288595 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 29 11:25:01.290216 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:25:01.294677 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:25:01.297439 systemd-networkd[706]: eth0: DHCPv6 lease lost Jan 29 11:25:01.298490 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:25:01.298598 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:25:01.301101 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:25:01.301228 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:25:01.305249 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:25:01.305315 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:25:01.313771 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:25:01.316749 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:25:01.316808 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:25:01.319064 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:25:01.319112 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:25:01.320901 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:25:01.320944 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:25:01.323169 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:25:01.323212 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:25:01.325102 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:25:01.336791 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:25:01.336943 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 29 11:25:01.339527 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:25:01.339605 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:25:01.342414 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:25:01.342474 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:25:01.343839 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:25:01.343870 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:25:01.345283 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:25:01.345346 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:25:01.346906 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:25:01.346948 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:25:01.348004 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:25:01.348044 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:25:01.356469 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:25:01.360635 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:25:01.360692 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:25:01.364208 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:25:01.364251 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:25:01.366790 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:25:01.366890 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:25:01.478863 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:25:01.479108 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 29 11:25:01.482193 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:25:01.484297 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:25:01.484492 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:25:01.494674 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:25:01.528825 systemd[1]: Switching root. Jan 29 11:25:01.570752 systemd-journald[185]: Journal stopped Jan 29 11:25:03.075927 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Jan 29 11:25:03.075990 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:25:03.076027 kernel: SELinux: policy capability open_perms=1 Jan 29 11:25:03.076047 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:25:03.076064 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:25:03.076087 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:25:03.076113 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:25:03.076131 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:25:03.076146 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:25:03.076161 kernel: audit: type=1403 audit(1738149901.973:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:25:03.076176 systemd[1]: Successfully loaded SELinux policy in 72.359ms. Jan 29 11:25:03.079415 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.609ms. Jan 29 11:25:03.079459 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:25:03.079483 systemd[1]: Detected virtualization kvm. Jan 29 11:25:03.079502 systemd[1]: Detected architecture x86-64. 
Jan 29 11:25:03.079519 systemd[1]: Detected first boot. Jan 29 11:25:03.079537 systemd[1]: Hostname set to . Jan 29 11:25:03.079553 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:25:03.079572 zram_generator::config[992]: No configuration found. Jan 29 11:25:03.079590 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:25:03.079605 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:25:03.079621 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:25:03.079636 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:25:03.079653 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:25:03.079668 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:25:03.079684 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:25:03.079705 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:25:03.079721 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:25:03.079737 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:25:03.079752 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:25:03.079767 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:25:03.079785 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:25:03.079801 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:25:03.079816 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:25:03.079831 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jan 29 11:25:03.079849 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:25:03.079865 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:25:03.079880 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:25:03.079895 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:25:03.079910 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:25:03.079925 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:25:03.079943 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:25:03.079959 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:25:03.079974 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:25:03.079989 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:25:03.080006 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:25:03.080022 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:25:03.080038 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:25:03.080054 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:25:03.080071 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:25:03.080087 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:25:03.080107 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:25:03.080125 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:25:03.080141 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:25:03.080161 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jan 29 11:25:03.080178 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:25:03.080194 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:25:03.080211 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:25:03.080227 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:25:03.080248 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:25:03.080264 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:25:03.080281 systemd[1]: Reached target machines.target - Containers. Jan 29 11:25:03.080297 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:25:03.080314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:25:03.080352 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:25:03.080370 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:25:03.080386 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:25:03.080407 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:25:03.080425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:25:03.080439 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:25:03.080452 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:25:03.080466 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jan 29 11:25:03.080480 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:25:03.080498 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:25:03.080511 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:25:03.080525 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:25:03.080541 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:25:03.080555 kernel: fuse: init (API version 7.39) Jan 29 11:25:03.080570 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:25:03.080584 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:25:03.080597 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:25:03.080610 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:25:03.080624 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:25:03.080638 systemd[1]: Stopped verity-setup.service. Jan 29 11:25:03.080652 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:25:03.080668 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:25:03.080681 kernel: ACPI: bus type drm_connector registered Jan 29 11:25:03.080695 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:25:03.080709 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:25:03.080722 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:25:03.080738 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:25:03.080752 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:25:03.080765 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 29 11:25:03.080811 systemd-journald[1085]: Collecting audit messages is disabled. Jan 29 11:25:03.080841 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:25:03.080855 systemd-journald[1085]: Journal started Jan 29 11:25:03.080884 systemd-journald[1085]: Runtime Journal (/run/log/journal/d1bcaa41435a41279fe1e44c4a1b39cd) is 8.0M, max 78.3M, 70.3M free. Jan 29 11:25:02.710433 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:25:03.084196 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:25:03.084226 kernel: loop: module loaded Jan 29 11:25:02.731304 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:25:02.731718 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:25:03.086461 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:25:03.087561 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:25:03.088269 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:25:03.088410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:25:03.089102 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:25:03.089213 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:25:03.089970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:25:03.090093 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:25:03.090888 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:25:03.091008 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:25:03.091787 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:25:03.091920 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:25:03.092854 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 29 11:25:03.093567 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:25:03.094286 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:25:03.105165 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:25:03.112489 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:25:03.121422 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:25:03.123408 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:25:03.123450 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:25:03.126224 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:25:03.132887 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:25:03.139177 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:25:03.139862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:25:03.143571 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:25:03.148466 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:25:03.149056 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:25:03.154486 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:25:03.156260 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 29 11:25:03.158431 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:25:03.166182 systemd-journald[1085]: Time spent on flushing to /var/log/journal/d1bcaa41435a41279fe1e44c4a1b39cd is 52.743ms for 922 entries. Jan 29 11:25:03.166182 systemd-journald[1085]: System Journal (/var/log/journal/d1bcaa41435a41279fe1e44c4a1b39cd) is 8.0M, max 584.8M, 576.8M free. Jan 29 11:25:03.279592 systemd-journald[1085]: Received client request to flush runtime journal. Jan 29 11:25:03.279656 kernel: loop0: detected capacity change from 0 to 138184 Jan 29 11:25:03.165566 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:25:03.169879 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:25:03.172178 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:25:03.172924 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:25:03.173611 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:25:03.174908 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:25:03.176206 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:25:03.182574 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:25:03.190487 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:25:03.197512 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:25:03.231194 udevadm[1134]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:25:03.242980 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 11:25:03.281018 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:25:03.300123 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:25:03.303689 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:25:03.337603 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:25:03.336065 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:25:03.347584 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:25:03.358788 kernel: loop1: detected capacity change from 0 to 8 Jan 29 11:25:03.385915 kernel: loop2: detected capacity change from 0 to 205544 Jan 29 11:25:03.397977 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Jan 29 11:25:03.398001 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Jan 29 11:25:03.406571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:25:03.447365 kernel: loop3: detected capacity change from 0 to 141000 Jan 29 11:25:03.579378 kernel: loop4: detected capacity change from 0 to 138184 Jan 29 11:25:03.631553 kernel: loop5: detected capacity change from 0 to 8 Jan 29 11:25:03.634440 kernel: loop6: detected capacity change from 0 to 205544 Jan 29 11:25:03.686480 kernel: loop7: detected capacity change from 0 to 141000 Jan 29 11:25:03.746731 (sd-merge)[1150]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 29 11:25:03.747217 (sd-merge)[1150]: Merged extensions into '/usr'. Jan 29 11:25:03.767622 systemd[1]: Reloading requested from client PID 1125 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:25:03.767654 systemd[1]: Reloading... Jan 29 11:25:03.916373 zram_generator::config[1173]: No configuration found. 
Jan 29 11:25:04.091831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:25:04.152729 ldconfig[1120]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:25:04.155973 systemd[1]: Reloading finished in 387 ms. Jan 29 11:25:04.183080 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:25:04.184168 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:25:04.185196 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:25:04.194548 systemd[1]: Starting ensure-sysext.service... Jan 29 11:25:04.197466 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:25:04.201724 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:25:04.225527 systemd[1]: Reloading requested from client PID 1233 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:25:04.225547 systemd[1]: Reloading... Jan 29 11:25:04.229212 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:25:04.229894 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:25:04.230757 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:25:04.231049 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jan 29 11:25:04.231117 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jan 29 11:25:04.241156 systemd-tmpfiles[1234]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 29 11:25:04.241169 systemd-tmpfiles[1234]: Skipping /boot Jan 29 11:25:04.261272 systemd-udevd[1235]: Using default interface naming scheme 'v255'. Jan 29 11:25:04.264297 systemd-tmpfiles[1234]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:25:04.264311 systemd-tmpfiles[1234]: Skipping /boot Jan 29 11:25:04.329350 zram_generator::config[1280]: No configuration found. Jan 29 11:25:04.444378 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1274) Jan 29 11:25:04.480660 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:25:04.506395 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 29 11:25:04.536005 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:25:04.545542 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 11:25:04.592121 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 29 11:25:04.600484 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 29 11:25:04.600534 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 29 11:25:04.601626 kernel: Console: switching to colour dummy device 80x25 Jan 29 11:25:04.603466 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 11:25:04.603508 kernel: [drm] features: -context_init Jan 29 11:25:04.604916 kernel: [drm] number of scanouts: 1 Jan 29 11:25:04.605573 kernel: [drm] number of cap sets: 0 Jan 29 11:25:04.609357 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 29 11:25:04.611369 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:25:04.616372 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 11:25:04.623358 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 11:25:04.631361 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 11:25:04.670188 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:25:04.672595 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:25:04.673170 systemd[1]: Reloading finished in 447 ms. Jan 29 11:25:04.688757 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:25:04.689778 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:25:04.734559 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:25:04.744037 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:25:04.749678 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:25:04.749968 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 29 11:25:04.754649 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:25:04.756907 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:25:04.758268 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:25:04.766615 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:25:04.766850 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:25:04.768831 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:25:04.770561 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:25:04.780637 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:25:04.786662 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:25:04.791809 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:25:04.794614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:25:04.795571 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:25:04.798462 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:25:04.811664 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:25:04.814460 systemd[1]: Finished ensure-sysext.service. Jan 29 11:25:04.833570 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:25:04.837607 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:25:04.840959 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 29 11:25:04.841790 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:25:04.843545 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:25:04.843888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:25:04.844766 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:25:04.845546 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:25:04.854724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:25:04.854876 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:25:04.868725 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:25:04.868889 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:25:04.871857 lvm[1368]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:25:04.876931 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:25:04.889164 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:25:04.914752 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:25:04.919153 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:25:04.927551 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:25:04.930417 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:25:04.940928 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:25:04.941298 lvm[1394]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 29 11:25:04.954655 augenrules[1398]: No rules
Jan 29 11:25:04.960267 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:25:04.960451 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:25:04.972362 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:25:04.977930 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 11:25:04.980346 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:25:04.998744 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:25:05.002940 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:25:05.039709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:25:05.074806 systemd-networkd[1358]: lo: Link UP
Jan 29 11:25:05.074816 systemd-networkd[1358]: lo: Gained carrier
Jan 29 11:25:05.075939 systemd-networkd[1358]: Enumeration completed
Jan 29 11:25:05.076021 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:25:05.080982 systemd-networkd[1358]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:25:05.080994 systemd-networkd[1358]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:25:05.081692 systemd-networkd[1358]: eth0: Link UP
Jan 29 11:25:05.081701 systemd-networkd[1358]: eth0: Gained carrier
Jan 29 11:25:05.081716 systemd-networkd[1358]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:25:05.083488 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:25:05.091033 systemd-resolved[1361]: Positive Trust Anchors:
Jan 29 11:25:05.093367 systemd-resolved[1361]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:25:05.093412 systemd-resolved[1361]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:25:05.094396 systemd-networkd[1358]: eth0: DHCPv4 address 172.24.4.109/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 29 11:25:05.103812 systemd-resolved[1361]: Using system hostname 'ci-4186-1-0-f-29d8fa4ded.novalocal'.
Jan 29 11:25:05.105684 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:25:05.106694 systemd[1]: Reached target network.target - Network.
Jan 29 11:25:05.107166 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:25:05.111455 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:25:05.112245 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:25:05.115451 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:25:05.117625 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:25:05.119305 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:25:05.120586 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:25:05.120621 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:25:05.121631 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:25:05.123818 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:25:05.126203 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:25:05.128524 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:25:05.133271 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:25:05.137450 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:25:05.143727 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:25:06.324502 systemd-resolved[1361]: Clock change detected. Flushing caches.
Jan 29 11:25:06.324632 systemd-timesyncd[1371]: Contacted time server 95.81.173.74:123 (0.flatcar.pool.ntp.org).
Jan 29 11:25:06.324692 systemd-timesyncd[1371]: Initial clock synchronization to Wed 2025-01-29 11:25:06.324463 UTC.
Jan 29 11:25:06.326462 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:25:06.328719 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:25:06.329423 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:25:06.330556 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:25:06.330598 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:25:06.341529 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:25:06.347554 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 11:25:06.360853 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:25:06.367229 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:25:06.373048 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:25:06.374006 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:25:06.380118 jq[1427]: false
Jan 29 11:25:06.383472 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:25:06.390695 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:25:06.396509 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:25:06.407751 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:25:06.410835 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:25:06.420958 extend-filesystems[1428]: Found loop4
Jan 29 11:25:06.420958 extend-filesystems[1428]: Found loop5
Jan 29 11:25:06.420958 extend-filesystems[1428]: Found loop6
Jan 29 11:25:06.420958 extend-filesystems[1428]: Found loop7
Jan 29 11:25:06.420958 extend-filesystems[1428]: Found vda
Jan 29 11:25:06.420958 extend-filesystems[1428]: Found vda1
Jan 29 11:25:06.420958 extend-filesystems[1428]: Found vda2
Jan 29 11:25:06.420958 extend-filesystems[1428]: Found vda3
Jan 29 11:25:06.420958 extend-filesystems[1428]: Found usr
Jan 29 11:25:06.420958 extend-filesystems[1428]: Found vda4
Jan 29 11:25:06.420958 extend-filesystems[1428]: Found vda6
Jan 29 11:25:06.420958 extend-filesystems[1428]: Found vda7
Jan 29 11:25:06.420958 extend-filesystems[1428]: Found vda9
Jan 29 11:25:06.420958 extend-filesystems[1428]: Checking size of /dev/vda9
Jan 29 11:25:06.411370 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:25:06.448910 dbus-daemon[1424]: [system] SELinux support is enabled
Jan 29 11:25:06.424579 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 11:25:06.487818 extend-filesystems[1428]: Resized partition /dev/vda9
Jan 29 11:25:06.431569 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 11:25:06.488698 jq[1445]: true
Jan 29 11:25:06.449369 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 11:25:06.502687 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024)
Jan 29 11:25:06.503480 update_engine[1440]: I20250129 11:25:06.486856 1440 main.cc:92] Flatcar Update Engine starting
Jan 29 11:25:06.465011 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 11:25:06.465191 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 11:25:06.465506 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 11:25:06.466143 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 11:25:06.477250 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 11:25:06.477991 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 11:25:06.507543 update_engine[1440]: I20250129 11:25:06.504245 1440 update_check_scheduler.cc:74] Next update check in 9m28s
Jan 29 11:25:06.513488 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jan 29 11:25:06.512641 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 11:25:06.512671 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 11:25:06.515676 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 11:25:06.515700 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 11:25:06.519047 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 11:25:06.530988 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 11:25:06.571259 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jan 29 11:25:06.571344 jq[1449]: true
Jan 29 11:25:06.548408 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 11:25:06.575644 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 11:25:06.575644 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 11:25:06.575644 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jan 29 11:25:06.554755 systemd-logind[1433]: New seat seat0.
Jan 29 11:25:06.591213 extend-filesystems[1428]: Resized filesystem in /dev/vda9
Jan 29 11:25:06.571412 systemd-logind[1433]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 29 11:25:06.571444 systemd-logind[1433]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 11:25:06.578426 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 11:25:06.595855 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 11:25:06.596564 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 11:25:06.628121 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1268)
Jan 29 11:25:06.639938 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 11:25:06.664937 bash[1477]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:25:06.671626 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 11:25:06.691003 systemd[1]: Starting sshkeys.service...
Jan 29 11:25:06.705793 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 29 11:25:06.737883 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 29 11:25:06.804648 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 11:25:06.908274 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 11:25:06.934156 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 11:25:06.945798 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 11:25:06.949167 containerd[1452]: time="2025-01-29T11:25:06.949108864Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 11:25:06.951913 systemd[1]: Started sshd@0-172.24.4.109:22-172.24.4.1:50458.service - OpenSSH per-connection server daemon (172.24.4.1:50458).
Jan 29 11:25:06.963661 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 11:25:06.963834 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 11:25:06.974911 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 11:25:06.989670 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 11:25:07.002924 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 11:25:07.019817 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 29 11:25:07.023580 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 11:25:07.028610 containerd[1452]: time="2025-01-29T11:25:07.028546299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:25:07.030757 containerd[1452]: time="2025-01-29T11:25:07.030704998Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:25:07.030757 containerd[1452]: time="2025-01-29T11:25:07.030743240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 11:25:07.030855 containerd[1452]: time="2025-01-29T11:25:07.030772354Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 11:25:07.031029 containerd[1452]: time="2025-01-29T11:25:07.030989672Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 11:25:07.031063 containerd[1452]: time="2025-01-29T11:25:07.031025479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 11:25:07.031159 containerd[1452]: time="2025-01-29T11:25:07.031118864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:25:07.031159 containerd[1452]: time="2025-01-29T11:25:07.031151235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:25:07.031431 containerd[1452]: time="2025-01-29T11:25:07.031376197Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:25:07.031478 containerd[1452]: time="2025-01-29T11:25:07.031431320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 11:25:07.031478 containerd[1452]: time="2025-01-29T11:25:07.031458832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:25:07.031545 containerd[1452]: time="2025-01-29T11:25:07.031476485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 11:25:07.031628 containerd[1452]: time="2025-01-29T11:25:07.031601609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:25:07.031924 containerd[1452]: time="2025-01-29T11:25:07.031891213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:25:07.032045 containerd[1452]: time="2025-01-29T11:25:07.032013973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:25:07.032045 containerd[1452]: time="2025-01-29T11:25:07.032038619Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 11:25:07.032159 containerd[1452]: time="2025-01-29T11:25:07.032129840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 11:25:07.032215 containerd[1452]: time="2025-01-29T11:25:07.032195283Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 11:25:07.054381 containerd[1452]: time="2025-01-29T11:25:07.054320048Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 11:25:07.054381 containerd[1452]: time="2025-01-29T11:25:07.054376824Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 11:25:07.054381 containerd[1452]: time="2025-01-29T11:25:07.054412812Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 11:25:07.054640 containerd[1452]: time="2025-01-29T11:25:07.054432889Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 11:25:07.054640 containerd[1452]: time="2025-01-29T11:25:07.054449060Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 11:25:07.054640 containerd[1452]: time="2025-01-29T11:25:07.054616774Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 11:25:07.055463 containerd[1452]: time="2025-01-29T11:25:07.054957984Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 11:25:07.055463 containerd[1452]: time="2025-01-29T11:25:07.055131600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 11:25:07.055463 containerd[1452]: time="2025-01-29T11:25:07.055152499Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 11:25:07.055463 containerd[1452]: time="2025-01-29T11:25:07.055169802Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 11:25:07.055463 containerd[1452]: time="2025-01-29T11:25:07.055187835Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 11:25:07.055463 containerd[1452]: time="2025-01-29T11:25:07.055203535Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 11:25:07.055463 containerd[1452]: time="2025-01-29T11:25:07.055218082Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 11:25:07.055463 containerd[1452]: time="2025-01-29T11:25:07.055233812Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 11:25:07.055463 containerd[1452]: time="2025-01-29T11:25:07.055254350Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 11:25:07.055463 containerd[1452]: time="2025-01-29T11:25:07.055269258Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 11:25:07.055463 containerd[1452]: time="2025-01-29T11:25:07.055284256Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 11:25:07.055463 containerd[1452]: time="2025-01-29T11:25:07.055297571Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 11:25:07.055463 containerd[1452]: time="2025-01-29T11:25:07.055321075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055463 containerd[1452]: time="2025-01-29T11:25:07.055337206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055352204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055367172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055380567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055426212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055442222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055457842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055472960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055490283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055504058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055517794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055531400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055547530Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055568359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055583167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.055837 containerd[1452]: time="2025-01-29T11:25:07.055595350Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 11:25:07.056140 containerd[1452]: time="2025-01-29T11:25:07.055644542Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 11:25:07.056140 containerd[1452]: time="2025-01-29T11:25:07.055665090Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 11:25:07.056140 containerd[1452]: time="2025-01-29T11:25:07.055677714Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 11:25:07.056140 containerd[1452]: time="2025-01-29T11:25:07.055691300Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 11:25:07.056140 containerd[1452]: time="2025-01-29T11:25:07.055702721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.056140 containerd[1452]: time="2025-01-29T11:25:07.055716797Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 11:25:07.056140 containerd[1452]: time="2025-01-29T11:25:07.055728509Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 11:25:07.056140 containerd[1452]: time="2025-01-29T11:25:07.055739410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 11:25:07.056311 containerd[1452]: time="2025-01-29T11:25:07.056041196Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 11:25:07.056311 containerd[1452]: time="2025-01-29T11:25:07.056094466Z" level=info msg="Connect containerd service"
Jan 29 11:25:07.056311 containerd[1452]: time="2025-01-29T11:25:07.056127197Z" level=info msg="using legacy CRI server"
Jan 29 11:25:07.056311 containerd[1452]: time="2025-01-29T11:25:07.056134952Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 11:25:07.056311 containerd[1452]: time="2025-01-29T11:25:07.056261088Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 11:25:07.056836 containerd[1452]: time="2025-01-29T11:25:07.056812322Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:25:07.057012 containerd[1452]: time="2025-01-29T11:25:07.056977502Z" level=info msg="Start subscribing containerd event"
Jan 29 11:25:07.057137 containerd[1452]: time="2025-01-29T11:25:07.057085354Z" level=info msg="Start recovering state"
Jan 29 11:25:07.057267 containerd[1452]: time="2025-01-29T11:25:07.057090824Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 11:25:07.057267 containerd[1452]: time="2025-01-29T11:25:07.057256776Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 11:25:07.057404 containerd[1452]: time="2025-01-29T11:25:07.057329733Z" level=info msg="Start event monitor"
Jan 29 11:25:07.057404 containerd[1452]: time="2025-01-29T11:25:07.057357134Z" level=info msg="Start snapshots syncer"
Jan 29 11:25:07.057404 containerd[1452]: time="2025-01-29T11:25:07.057368285Z" level=info msg="Start cni network conf syncer for default"
Jan 29 11:25:07.057404 containerd[1452]: time="2025-01-29T11:25:07.057376109Z" level=info msg="Start streaming server"
Jan 29 11:25:07.058075 containerd[1452]: time="2025-01-29T11:25:07.057618875Z" level=info msg="containerd successfully booted in 0.109180s"
Jan 29 11:25:07.057751 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 11:25:08.331666 systemd-networkd[1358]: eth0: Gained IPv6LL
Jan 29 11:25:08.336985 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 11:25:08.344754 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 11:25:08.352553 sshd[1504]: Accepted publickey for core from 172.24.4.1 port 50458 ssh2: RSA SHA256:oU87gZauU7ia4aLeEgyf9AIK9ZSEX9ZiyCRj04JXl88
Jan 29 11:25:08.356082 sshd-session[1504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:25:08.358132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:25:08.375076 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 11:25:08.424211 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 11:25:08.441577 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 11:25:08.456298 systemd-logind[1433]: New session 1 of user core.
Jan 29 11:25:08.458241 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 11:25:08.468249 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 11:25:08.478856 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 11:25:08.488732 (systemd)[1530]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 11:25:08.600135 systemd[1530]: Queued start job for default target default.target.
Jan 29 11:25:08.605365 systemd[1530]: Created slice app.slice - User Application Slice.
Jan 29 11:25:08.605415 systemd[1530]: Reached target paths.target - Paths.
Jan 29 11:25:08.605432 systemd[1530]: Reached target timers.target - Timers.
Jan 29 11:25:08.607981 systemd[1530]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 11:25:08.640957 systemd[1530]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 11:25:08.641077 systemd[1530]: Reached target sockets.target - Sockets.
Jan 29 11:25:08.641094 systemd[1530]: Reached target basic.target - Basic System.
Jan 29 11:25:08.641215 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 11:25:08.645669 systemd[1530]: Reached target default.target - Main User Target.
Jan 29 11:25:08.645745 systemd[1530]: Startup finished in 150ms.
Jan 29 11:25:08.647614 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 11:25:08.985283 systemd[1]: Started sshd@1-172.24.4.109:22-172.24.4.1:37526.service - OpenSSH per-connection server daemon (172.24.4.1:37526).
Jan 29 11:25:10.077058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:25:10.098872 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:25:10.299364 sshd[1541]: Accepted publickey for core from 172.24.4.1 port 37526 ssh2: RSA SHA256:oU87gZauU7ia4aLeEgyf9AIK9ZSEX9ZiyCRj04JXl88
Jan 29 11:25:10.300969 sshd-session[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:25:10.309031 systemd-logind[1433]: New session 2 of user core.
Jan 29 11:25:10.318886 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 11:25:10.876014 sshd[1555]: Connection closed by 172.24.4.1 port 37526
Jan 29 11:25:10.877033 sshd-session[1541]: pam_unix(sshd:session): session closed for user core
Jan 29 11:25:10.891659 systemd[1]: sshd@1-172.24.4.109:22-172.24.4.1:37526.service: Deactivated successfully.
Jan 29 11:25:10.894090 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 11:25:10.895509 systemd-logind[1433]: Session 2 logged out. Waiting for processes to exit.
Jan 29 11:25:10.907500 systemd[1]: Started sshd@2-172.24.4.109:22-172.24.4.1:37534.service - OpenSSH per-connection server daemon (172.24.4.1:37534).
Jan 29 11:25:10.915042 systemd-logind[1433]: Removed session 2.
Jan 29 11:25:11.194567 kubelet[1550]: E0129 11:25:11.193742 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:25:11.198269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:25:11.198595 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:25:11.199481 systemd[1]: kubelet.service: Consumed 1.726s CPU time.
Jan 29 11:25:12.037066 agetty[1510]: failed to open credentials directory
Jan 29 11:25:12.037828 agetty[1511]: failed to open credentials directory
Jan 29 11:25:12.056692 login[1510]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 29 11:25:12.067932 systemd-logind[1433]: New session 3 of user core.
Jan 29 11:25:12.078023 login[1511]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 29 11:25:12.078913 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 11:25:12.094893 systemd-logind[1433]: New session 4 of user core.
Jan 29 11:25:12.102877 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 11:25:12.171283 sshd[1560]: Accepted publickey for core from 172.24.4.1 port 37534 ssh2: RSA SHA256:oU87gZauU7ia4aLeEgyf9AIK9ZSEX9ZiyCRj04JXl88
Jan 29 11:25:12.173676 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:25:12.182508 systemd-logind[1433]: New session 5 of user core.
Jan 29 11:25:12.191883 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 11:25:12.909565 sshd[1589]: Connection closed by 172.24.4.1 port 37534
Jan 29 11:25:12.910605 sshd-session[1560]: pam_unix(sshd:session): session closed for user core
Jan 29 11:25:12.915741 systemd[1]: sshd@2-172.24.4.109:22-172.24.4.1:37534.service: Deactivated successfully.
Jan 29 11:25:12.919560 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 11:25:12.922792 systemd-logind[1433]: Session 5 logged out. Waiting for processes to exit.
Jan 29 11:25:12.925168 systemd-logind[1433]: Removed session 5.
Jan 29 11:25:13.431972 coreos-metadata[1423]: Jan 29 11:25:13.431 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 11:25:13.478874 coreos-metadata[1423]: Jan 29 11:25:13.478 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 29 11:25:13.673268 coreos-metadata[1423]: Jan 29 11:25:13.673 INFO Fetch successful
Jan 29 11:25:13.673268 coreos-metadata[1423]: Jan 29 11:25:13.673 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 29 11:25:13.689561 coreos-metadata[1423]: Jan 29 11:25:13.688 INFO Fetch successful
Jan 29 11:25:13.689561 coreos-metadata[1423]: Jan 29 11:25:13.689 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 29 11:25:13.704338 coreos-metadata[1423]: Jan 29 11:25:13.704 INFO Fetch successful
Jan 29 11:25:13.704338 coreos-metadata[1423]: Jan 29 11:25:13.704 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 29 11:25:13.719785 coreos-metadata[1423]: Jan 29 11:25:13.719 INFO Fetch successful
Jan 29 11:25:13.719785 coreos-metadata[1423]: Jan 29 11:25:13.719 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 29 11:25:13.734522 coreos-metadata[1423]: Jan 29 11:25:13.734 INFO Fetch successful
Jan 29 11:25:13.734522 coreos-metadata[1423]: Jan 29 11:25:13.734 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 29 11:25:13.748351 coreos-metadata[1423]: Jan 29 11:25:13.748 INFO Fetch successful
Jan 29 11:25:13.799680 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 29 11:25:13.801677 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 11:25:13.840762 coreos-metadata[1482]: Jan 29 11:25:13.840 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 11:25:13.882199 coreos-metadata[1482]: Jan 29 11:25:13.882 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 29 11:25:13.897772 coreos-metadata[1482]: Jan 29 11:25:13.897 INFO Fetch successful
Jan 29 11:25:13.897956 coreos-metadata[1482]: Jan 29 11:25:13.897 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 29 11:25:13.912103 coreos-metadata[1482]: Jan 29 11:25:13.911 INFO Fetch successful
Jan 29 11:25:13.962196 unknown[1482]: wrote ssh authorized keys file for user: core
Jan 29 11:25:14.010223 update-ssh-keys[1602]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:25:14.010923 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 29 11:25:14.013957 systemd[1]: Finished sshkeys.service.
Jan 29 11:25:14.018715 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 11:25:14.019778 systemd[1]: Startup finished in 1.246s (kernel) + 14.150s (initrd) + 10.937s (userspace) = 26.335s.
Jan 29 11:25:21.361938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:25:21.369792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:25:21.654696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:25:21.670008 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:25:21.796586 kubelet[1614]: E0129 11:25:21.796460 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:25:21.803867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:25:21.804254 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:25:22.934014 systemd[1]: Started sshd@3-172.24.4.109:22-172.24.4.1:54800.service - OpenSSH per-connection server daemon (172.24.4.1:54800).
Jan 29 11:25:24.091738 sshd[1623]: Accepted publickey for core from 172.24.4.1 port 54800 ssh2: RSA SHA256:oU87gZauU7ia4aLeEgyf9AIK9ZSEX9ZiyCRj04JXl88
Jan 29 11:25:24.094170 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:25:24.105073 systemd-logind[1433]: New session 6 of user core.
Jan 29 11:25:24.111765 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 11:25:24.731047 sshd[1625]: Connection closed by 172.24.4.1 port 54800
Jan 29 11:25:24.732119 sshd-session[1623]: pam_unix(sshd:session): session closed for user core
Jan 29 11:25:24.743850 systemd[1]: sshd@3-172.24.4.109:22-172.24.4.1:54800.service: Deactivated successfully.
Jan 29 11:25:24.746992 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 11:25:24.749979 systemd-logind[1433]: Session 6 logged out. Waiting for processes to exit.
Jan 29 11:25:24.756940 systemd[1]: Started sshd@4-172.24.4.109:22-172.24.4.1:35474.service - OpenSSH per-connection server daemon (172.24.4.1:35474).
Jan 29 11:25:24.760000 systemd-logind[1433]: Removed session 6.
Jan 29 11:25:25.942152 sshd[1630]: Accepted publickey for core from 172.24.4.1 port 35474 ssh2: RSA SHA256:oU87gZauU7ia4aLeEgyf9AIK9ZSEX9ZiyCRj04JXl88
Jan 29 11:25:25.944791 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:25:25.955971 systemd-logind[1433]: New session 7 of user core.
Jan 29 11:25:25.965721 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 11:25:26.737439 sshd[1632]: Connection closed by 172.24.4.1 port 35474
Jan 29 11:25:26.737992 sshd-session[1630]: pam_unix(sshd:session): session closed for user core
Jan 29 11:25:26.753939 systemd[1]: sshd@4-172.24.4.109:22-172.24.4.1:35474.service: Deactivated successfully.
Jan 29 11:25:26.757045 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 11:25:26.759899 systemd-logind[1433]: Session 7 logged out. Waiting for processes to exit.
Jan 29 11:25:26.769068 systemd[1]: Started sshd@5-172.24.4.109:22-172.24.4.1:35476.service - OpenSSH per-connection server daemon (172.24.4.1:35476).
Jan 29 11:25:26.772254 systemd-logind[1433]: Removed session 7.
Jan 29 11:25:27.947130 sshd[1637]: Accepted publickey for core from 172.24.4.1 port 35476 ssh2: RSA SHA256:oU87gZauU7ia4aLeEgyf9AIK9ZSEX9ZiyCRj04JXl88
Jan 29 11:25:27.949706 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:25:27.963000 systemd-logind[1433]: New session 8 of user core.
Jan 29 11:25:27.974852 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 11:25:28.725436 sshd[1639]: Connection closed by 172.24.4.1 port 35476
Jan 29 11:25:28.724720 sshd-session[1637]: pam_unix(sshd:session): session closed for user core
Jan 29 11:25:28.735384 systemd[1]: sshd@5-172.24.4.109:22-172.24.4.1:35476.service: Deactivated successfully.
Jan 29 11:25:28.738606 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 11:25:28.741825 systemd-logind[1433]: Session 8 logged out. Waiting for processes to exit.
Jan 29 11:25:28.751102 systemd[1]: Started sshd@6-172.24.4.109:22-172.24.4.1:35484.service - OpenSSH per-connection server daemon (172.24.4.1:35484).
Jan 29 11:25:28.753997 systemd-logind[1433]: Removed session 8.
Jan 29 11:25:29.935064 sshd[1644]: Accepted publickey for core from 172.24.4.1 port 35484 ssh2: RSA SHA256:oU87gZauU7ia4aLeEgyf9AIK9ZSEX9ZiyCRj04JXl88
Jan 29 11:25:29.937717 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:25:29.946937 systemd-logind[1433]: New session 9 of user core.
Jan 29 11:25:29.958663 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 11:25:30.431083 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 11:25:30.431818 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:25:30.449281 sudo[1647]: pam_unix(sudo:session): session closed for user root
Jan 29 11:25:30.724591 sshd[1646]: Connection closed by 172.24.4.1 port 35484
Jan 29 11:25:30.724681 sshd-session[1644]: pam_unix(sshd:session): session closed for user core
Jan 29 11:25:30.740989 systemd[1]: sshd@6-172.24.4.109:22-172.24.4.1:35484.service: Deactivated successfully.
Jan 29 11:25:30.744245 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 11:25:30.746016 systemd-logind[1433]: Session 9 logged out. Waiting for processes to exit.
Jan 29 11:25:30.753948 systemd[1]: Started sshd@7-172.24.4.109:22-172.24.4.1:35500.service - OpenSSH per-connection server daemon (172.24.4.1:35500).
Jan 29 11:25:30.756843 systemd-logind[1433]: Removed session 9.
Jan 29 11:25:31.862058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 11:25:31.879525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:25:31.907432 sshd[1652]: Accepted publickey for core from 172.24.4.1 port 35500 ssh2: RSA SHA256:oU87gZauU7ia4aLeEgyf9AIK9ZSEX9ZiyCRj04JXl88
Jan 29 11:25:31.910363 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:25:31.921598 systemd-logind[1433]: New session 10 of user core.
Jan 29 11:25:31.928734 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 11:25:32.213702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:25:32.217250 (kubelet)[1663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:25:32.327917 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 11:25:32.328201 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:25:32.329139 kubelet[1663]: E0129 11:25:32.329087 1663 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:25:32.332499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:25:32.332646 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:25:32.334289 sudo[1671]: pam_unix(sudo:session): session closed for user root
Jan 29 11:25:32.344459 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 11:25:32.345072 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:25:32.363771 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:25:32.399370 augenrules[1694]: No rules
Jan 29 11:25:32.400468 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:25:32.400640 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:25:32.402049 sudo[1670]: pam_unix(sudo:session): session closed for user root
Jan 29 11:25:32.544879 sshd[1657]: Connection closed by 172.24.4.1 port 35500
Jan 29 11:25:32.547383 sshd-session[1652]: pam_unix(sshd:session): session closed for user core
Jan 29 11:25:32.559733 systemd[1]: sshd@7-172.24.4.109:22-172.24.4.1:35500.service: Deactivated successfully.
Jan 29 11:25:32.563051 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 11:25:32.565026 systemd-logind[1433]: Session 10 logged out. Waiting for processes to exit.
Jan 29 11:25:32.571002 systemd[1]: Started sshd@8-172.24.4.109:22-172.24.4.1:35504.service - OpenSSH per-connection server daemon (172.24.4.1:35504).
Jan 29 11:25:32.573621 systemd-logind[1433]: Removed session 10.
Jan 29 11:25:33.716863 sshd[1702]: Accepted publickey for core from 172.24.4.1 port 35504 ssh2: RSA SHA256:oU87gZauU7ia4aLeEgyf9AIK9ZSEX9ZiyCRj04JXl88
Jan 29 11:25:33.719523 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:25:33.728799 systemd-logind[1433]: New session 11 of user core.
Jan 29 11:25:33.744752 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 11:25:34.188728 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 11:25:34.189373 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:25:35.481032 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:25:35.493957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:25:35.541128 systemd[1]: Reloading requested from client PID 1737 ('systemctl') (unit session-11.scope)...
Jan 29 11:25:35.541142 systemd[1]: Reloading...
Jan 29 11:25:35.633460 zram_generator::config[1774]: No configuration found.
Jan 29 11:25:35.808741 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:25:35.894136 systemd[1]: Reloading finished in 352 ms.
Jan 29 11:25:35.945706 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 11:25:35.945782 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 29 11:25:35.946134 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:25:35.948526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:25:36.069023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:25:36.087259 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 11:25:36.157923 kubelet[1840]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:25:36.158245 kubelet[1840]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 11:25:36.158291 kubelet[1840]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:25:36.183303 kubelet[1840]: I0129 11:25:36.182840 1840 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 11:25:36.419446 kubelet[1840]: I0129 11:25:36.419060 1840 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 29 11:25:36.419446 kubelet[1840]: I0129 11:25:36.419115 1840 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 11:25:36.420028 kubelet[1840]: I0129 11:25:36.419993 1840 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 29 11:25:36.473805 kubelet[1840]: I0129 11:25:36.473740 1840 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 11:25:36.502188 kubelet[1840]: E0129 11:25:36.501834 1840 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 11:25:36.502188 kubelet[1840]: I0129 11:25:36.501907 1840 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 11:25:36.512340 kubelet[1840]: I0129 11:25:36.512219 1840 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 11:25:36.512616 kubelet[1840]: I0129 11:25:36.512492 1840 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 11:25:36.512832 kubelet[1840]: I0129 11:25:36.512721 1840 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 11:25:36.513233 kubelet[1840]: I0129 11:25:36.512801 1840 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.109","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 11:25:36.513233 kubelet[1840]: I0129 11:25:36.513201 1840 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 11:25:36.513233 kubelet[1840]: I0129 11:25:36.513226 1840 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 11:25:36.513764 kubelet[1840]: I0129 11:25:36.513469 1840 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:25:36.516976 kubelet[1840]: I0129 11:25:36.516894 1840 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 11:25:36.516976 kubelet[1840]: I0129 11:25:36.516952 1840 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 11:25:36.517250 kubelet[1840]: I0129 11:25:36.517011 1840 kubelet.go:314] "Adding apiserver pod source"
Jan 29 11:25:36.517250 kubelet[1840]: I0129 11:25:36.517039 1840 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 11:25:36.520214 kubelet[1840]: E0129 11:25:36.519035 1840 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:36.520214 kubelet[1840]: E0129 11:25:36.519272 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:36.530719 kubelet[1840]: I0129 11:25:36.530437 1840 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 11:25:36.535239 kubelet[1840]: I0129 11:25:36.534970 1840 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 11:25:36.537442 kubelet[1840]: W0129 11:25:36.536611 1840 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 11:25:36.538081 kubelet[1840]: I0129 11:25:36.538049 1840 server.go:1269] "Started kubelet"
Jan 29 11:25:36.540861 kubelet[1840]: I0129 11:25:36.540807 1840 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 11:25:36.543156 kubelet[1840]: I0129 11:25:36.543120 1840 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 11:25:36.545098 kubelet[1840]: I0129 11:25:36.543135 1840 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 11:25:36.555044 kubelet[1840]: I0129 11:25:36.555017 1840 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 11:25:36.557693 kubelet[1840]: I0129 11:25:36.557680 1840 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 11:25:36.558265 kubelet[1840]: I0129 11:25:36.558205 1840 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 11:25:36.558540 kubelet[1840]: I0129 11:25:36.558527 1840 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 11:25:36.558730 kubelet[1840]: I0129 11:25:36.558717 1840 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 11:25:36.558852 kubelet[1840]: I0129 11:25:36.558841 1840 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 11:25:36.559534 kubelet[1840]: E0129 11:25:36.559480 1840 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.109\" not found"
Jan 29 11:25:36.564567 kubelet[1840]: E0129 11:25:36.564421 1840 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 11:25:36.564942 kubelet[1840]: I0129 11:25:36.564873 1840 factory.go:221] Registration of the containerd container factory successfully
Jan 29 11:25:36.564942 kubelet[1840]: I0129 11:25:36.564886 1840 factory.go:221] Registration of the systemd container factory successfully
Jan 29 11:25:36.565124 kubelet[1840]: I0129 11:25:36.565068 1840 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 11:25:36.566906 kubelet[1840]: E0129 11:25:36.566849 1840 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.109\" not found" node="172.24.4.109"
Jan 29 11:25:36.576128 kubelet[1840]: I0129 11:25:36.576096 1840 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 11:25:36.576506 kubelet[1840]: I0129 11:25:36.576263 1840 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 11:25:36.576506 kubelet[1840]: I0129 11:25:36.576282 1840 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:25:36.581885 kubelet[1840]: I0129 11:25:36.581870 1840 policy_none.go:49] "None policy: Start"
Jan 29 11:25:36.582780 kubelet[1840]: I0129 11:25:36.582768 1840 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 11:25:36.582873 kubelet[1840]: I0129 11:25:36.582864 1840 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 11:25:36.596416 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 11:25:36.614752 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 11:25:36.619457 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 11:25:36.626373 kubelet[1840]: I0129 11:25:36.626347 1840 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 11:25:36.627879 kubelet[1840]: I0129 11:25:36.627074 1840 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 11:25:36.627879 kubelet[1840]: I0129 11:25:36.627090 1840 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 11:25:36.627879 kubelet[1840]: I0129 11:25:36.627357 1840 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 11:25:36.629635 kubelet[1840]: E0129 11:25:36.629566 1840 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.109\" not found"
Jan 29 11:25:36.653863 kubelet[1840]: I0129 11:25:36.653836 1840 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 11:25:36.655208 kubelet[1840]: I0129 11:25:36.655136 1840 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 11:25:36.655208 kubelet[1840]: I0129 11:25:36.655180 1840 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 11:25:36.655208 kubelet[1840]: I0129 11:25:36.655208 1840 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 29 11:25:36.655327 kubelet[1840]: E0129 11:25:36.655294 1840 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 29 11:25:36.729540 kubelet[1840]: I0129 11:25:36.729368 1840 kubelet_node_status.go:72] "Attempting to register node" node="172.24.4.109"
Jan 29 11:25:36.744822 kubelet[1840]: I0129 11:25:36.744706 1840 kubelet_node_status.go:75] "Successfully registered node" node="172.24.4.109"
Jan 29 11:25:36.869771 kubelet[1840]: I0129 11:25:36.869618 1840 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 29 11:25:36.871058 containerd[1452]: time="2025-01-29T11:25:36.870801742Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 11:25:36.871898 kubelet[1840]: I0129 11:25:36.871786 1840 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 29 11:25:37.112978 sudo[1705]: pam_unix(sudo:session): session closed for user root
Jan 29 11:25:37.389610 sshd[1704]: Connection closed by 172.24.4.1 port 35504
Jan 29 11:25:37.390756 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
Jan 29 11:25:37.396469 systemd-logind[1433]: Session 11 logged out. Waiting for processes to exit.
Jan 29 11:25:37.398189 systemd[1]: sshd@8-172.24.4.109:22-172.24.4.1:35504.service: Deactivated successfully.
Jan 29 11:25:37.402989 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 11:25:37.403363 systemd[1]: session-11.scope: Consumed 1.025s CPU time, 72.4M memory peak, 0B memory swap peak.
Jan 29 11:25:37.407463 systemd-logind[1433]: Removed session 11.
Jan 29 11:25:37.424491 kubelet[1840]: I0129 11:25:37.424005 1840 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 11:25:37.424491 kubelet[1840]: W0129 11:25:37.424321 1840 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 11:25:37.424491 kubelet[1840]: W0129 11:25:37.424434 1840 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 11:25:37.426329 kubelet[1840]: W0129 11:25:37.424931 1840 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 11:25:37.520103 kubelet[1840]: E0129 11:25:37.519986 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:25:37.520103 kubelet[1840]: I0129 11:25:37.520014 1840 apiserver.go:52] "Watching apiserver"
Jan 29 11:25:37.531293 kubelet[1840]: E0129 11:25:37.529363 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb"
Jan 29 11:25:37.552002 systemd[1]: Created slice kubepods-besteffort-pod468190a3_6194_48e7_bd47_286ac429550c.slice - libcontainer container kubepods-besteffort-pod468190a3_6194_48e7_bd47_286ac429550c.slice.
Jan 29 11:25:37.564705 kubelet[1840]: I0129 11:25:37.564662 1840 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:25:37.566279 kubelet[1840]: I0129 11:25:37.566074 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/618d298b-3aee-418b-8f1a-093ea40b4ebb-kubelet-dir\") pod \"csi-node-driver-86gkr\" (UID: \"618d298b-3aee-418b-8f1a-093ea40b4ebb\") " pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:37.566279 kubelet[1840]: I0129 11:25:37.566170 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/618d298b-3aee-418b-8f1a-093ea40b4ebb-registration-dir\") pod \"csi-node-driver-86gkr\" (UID: \"618d298b-3aee-418b-8f1a-093ea40b4ebb\") " pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:37.566279 kubelet[1840]: I0129 11:25:37.566265 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87cff40c-e000-45d5-af33-7047e5588794-lib-modules\") pod \"calico-node-n7gxb\" (UID: \"87cff40c-e000-45d5-af33-7047e5588794\") " pod="calico-system/calico-node-n7gxb" Jan 29 11:25:37.566831 kubelet[1840]: I0129 11:25:37.566330 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87cff40c-e000-45d5-af33-7047e5588794-xtables-lock\") pod \"calico-node-n7gxb\" (UID: \"87cff40c-e000-45d5-af33-7047e5588794\") " pod="calico-system/calico-node-n7gxb" Jan 29 11:25:37.566831 kubelet[1840]: I0129 11:25:37.566375 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/87cff40c-e000-45d5-af33-7047e5588794-cni-bin-dir\") pod 
\"calico-node-n7gxb\" (UID: \"87cff40c-e000-45d5-af33-7047e5588794\") " pod="calico-system/calico-node-n7gxb" Jan 29 11:25:37.566831 kubelet[1840]: I0129 11:25:37.566468 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/87cff40c-e000-45d5-af33-7047e5588794-cni-log-dir\") pod \"calico-node-n7gxb\" (UID: \"87cff40c-e000-45d5-af33-7047e5588794\") " pod="calico-system/calico-node-n7gxb" Jan 29 11:25:37.566831 kubelet[1840]: I0129 11:25:37.566536 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/468190a3-6194-48e7-bd47-286ac429550c-lib-modules\") pod \"kube-proxy-gjgsf\" (UID: \"468190a3-6194-48e7-bd47-286ac429550c\") " pod="kube-system/kube-proxy-gjgsf" Jan 29 11:25:37.566831 kubelet[1840]: I0129 11:25:37.566578 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/87cff40c-e000-45d5-af33-7047e5588794-node-certs\") pod \"calico-node-n7gxb\" (UID: \"87cff40c-e000-45d5-af33-7047e5588794\") " pod="calico-system/calico-node-n7gxb" Jan 29 11:25:37.567688 kubelet[1840]: I0129 11:25:37.566619 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/87cff40c-e000-45d5-af33-7047e5588794-flexvol-driver-host\") pod \"calico-node-n7gxb\" (UID: \"87cff40c-e000-45d5-af33-7047e5588794\") " pod="calico-system/calico-node-n7gxb" Jan 29 11:25:37.567688 kubelet[1840]: I0129 11:25:37.566661 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hs9f\" (UniqueName: \"kubernetes.io/projected/87cff40c-e000-45d5-af33-7047e5588794-kube-api-access-4hs9f\") pod \"calico-node-n7gxb\" (UID: 
\"87cff40c-e000-45d5-af33-7047e5588794\") " pod="calico-system/calico-node-n7gxb" Jan 29 11:25:37.567688 kubelet[1840]: I0129 11:25:37.566700 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/468190a3-6194-48e7-bd47-286ac429550c-kube-proxy\") pod \"kube-proxy-gjgsf\" (UID: \"468190a3-6194-48e7-bd47-286ac429550c\") " pod="kube-system/kube-proxy-gjgsf" Jan 29 11:25:37.567688 kubelet[1840]: I0129 11:25:37.566751 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/468190a3-6194-48e7-bd47-286ac429550c-xtables-lock\") pod \"kube-proxy-gjgsf\" (UID: \"468190a3-6194-48e7-bd47-286ac429550c\") " pod="kube-system/kube-proxy-gjgsf" Jan 29 11:25:37.567688 kubelet[1840]: I0129 11:25:37.566793 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ltqb\" (UniqueName: \"kubernetes.io/projected/468190a3-6194-48e7-bd47-286ac429550c-kube-api-access-8ltqb\") pod \"kube-proxy-gjgsf\" (UID: \"468190a3-6194-48e7-bd47-286ac429550c\") " pod="kube-system/kube-proxy-gjgsf" Jan 29 11:25:37.567996 kubelet[1840]: I0129 11:25:37.566843 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/87cff40c-e000-45d5-af33-7047e5588794-var-run-calico\") pod \"calico-node-n7gxb\" (UID: \"87cff40c-e000-45d5-af33-7047e5588794\") " pod="calico-system/calico-node-n7gxb" Jan 29 11:25:37.567996 kubelet[1840]: I0129 11:25:37.566886 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/87cff40c-e000-45d5-af33-7047e5588794-var-lib-calico\") pod \"calico-node-n7gxb\" (UID: \"87cff40c-e000-45d5-af33-7047e5588794\") " 
pod="calico-system/calico-node-n7gxb" Jan 29 11:25:37.567996 kubelet[1840]: I0129 11:25:37.566927 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/87cff40c-e000-45d5-af33-7047e5588794-cni-net-dir\") pod \"calico-node-n7gxb\" (UID: \"87cff40c-e000-45d5-af33-7047e5588794\") " pod="calico-system/calico-node-n7gxb" Jan 29 11:25:37.567996 kubelet[1840]: I0129 11:25:37.566981 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/618d298b-3aee-418b-8f1a-093ea40b4ebb-varrun\") pod \"csi-node-driver-86gkr\" (UID: \"618d298b-3aee-418b-8f1a-093ea40b4ebb\") " pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:37.569601 kubelet[1840]: I0129 11:25:37.567022 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/87cff40c-e000-45d5-af33-7047e5588794-policysync\") pod \"calico-node-n7gxb\" (UID: \"87cff40c-e000-45d5-af33-7047e5588794\") " pod="calico-system/calico-node-n7gxb" Jan 29 11:25:37.569601 kubelet[1840]: I0129 11:25:37.569350 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87cff40c-e000-45d5-af33-7047e5588794-tigera-ca-bundle\") pod \"calico-node-n7gxb\" (UID: \"87cff40c-e000-45d5-af33-7047e5588794\") " pod="calico-system/calico-node-n7gxb" Jan 29 11:25:37.569601 kubelet[1840]: I0129 11:25:37.569465 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/618d298b-3aee-418b-8f1a-093ea40b4ebb-socket-dir\") pod \"csi-node-driver-86gkr\" (UID: \"618d298b-3aee-418b-8f1a-093ea40b4ebb\") " pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:37.569986 kubelet[1840]: I0129 
11:25:37.569519 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b92sr\" (UniqueName: \"kubernetes.io/projected/618d298b-3aee-418b-8f1a-093ea40b4ebb-kube-api-access-b92sr\") pod \"csi-node-driver-86gkr\" (UID: \"618d298b-3aee-418b-8f1a-093ea40b4ebb\") " pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:37.581382 systemd[1]: Created slice kubepods-besteffort-pod87cff40c_e000_45d5_af33_7047e5588794.slice - libcontainer container kubepods-besteffort-pod87cff40c_e000_45d5_af33_7047e5588794.slice. Jan 29 11:25:37.684146 kubelet[1840]: E0129 11:25:37.684066 1840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:25:37.684261 kubelet[1840]: W0129 11:25:37.684245 1840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:25:37.684355 kubelet[1840]: E0129 11:25:37.684341 1840 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:25:37.697952 kubelet[1840]: E0129 11:25:37.697912 1840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:25:37.698078 kubelet[1840]: W0129 11:25:37.698066 1840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:25:37.698189 kubelet[1840]: E0129 11:25:37.698177 1840 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:25:37.698715 kubelet[1840]: E0129 11:25:37.698642 1840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:25:37.698853 kubelet[1840]: W0129 11:25:37.698810 1840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:25:37.698853 kubelet[1840]: E0129 11:25:37.698827 1840 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:25:37.704993 kubelet[1840]: E0129 11:25:37.704938 1840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:25:37.704993 kubelet[1840]: W0129 11:25:37.704953 1840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:25:37.704993 kubelet[1840]: E0129 11:25:37.704968 1840 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:25:37.871806 containerd[1452]: time="2025-01-29T11:25:37.871640344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gjgsf,Uid:468190a3-6194-48e7-bd47-286ac429550c,Namespace:kube-system,Attempt:0,}" Jan 29 11:25:37.892177 containerd[1452]: time="2025-01-29T11:25:37.891629975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n7gxb,Uid:87cff40c-e000-45d5-af33-7047e5588794,Namespace:calico-system,Attempt:0,}" Jan 29 11:25:38.520707 kubelet[1840]: E0129 11:25:38.520625 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:38.561681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746643070.mount: Deactivated successfully. Jan 29 11:25:38.578579 containerd[1452]: time="2025-01-29T11:25:38.578059754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:25:38.580937 containerd[1452]: time="2025-01-29T11:25:38.580806185Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:25:38.582851 containerd[1452]: time="2025-01-29T11:25:38.582766691Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 11:25:38.584502 containerd[1452]: time="2025-01-29T11:25:38.584344631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:25:38.586136 containerd[1452]: time="2025-01-29T11:25:38.586010826Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Jan 29 11:25:38.603420 containerd[1452]: time="2025-01-29T11:25:38.602518005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:25:38.606435 containerd[1452]: time="2025-01-29T11:25:38.605574638Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 733.122481ms" Jan 29 11:25:38.610914 containerd[1452]: time="2025-01-29T11:25:38.610798776Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 718.991759ms" Jan 29 11:25:38.782099 containerd[1452]: time="2025-01-29T11:25:38.781850173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:25:38.782099 containerd[1452]: time="2025-01-29T11:25:38.781907690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:25:38.783445 containerd[1452]: time="2025-01-29T11:25:38.781925854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:25:38.783445 containerd[1452]: time="2025-01-29T11:25:38.782760029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:25:38.785369 containerd[1452]: time="2025-01-29T11:25:38.785303930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:25:38.785514 containerd[1452]: time="2025-01-29T11:25:38.785458320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:25:38.785514 containerd[1452]: time="2025-01-29T11:25:38.785486252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:25:38.785767 containerd[1452]: time="2025-01-29T11:25:38.785673754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:25:38.872772 systemd[1]: run-containerd-runc-k8s.io-a509e0078c7c7faeefce12d4b41736a6b32bfbec0626823b80f9c49a3c33ceea-runc.u8pepO.mount: Deactivated successfully. Jan 29 11:25:38.884556 systemd[1]: Started cri-containerd-a509e0078c7c7faeefce12d4b41736a6b32bfbec0626823b80f9c49a3c33ceea.scope - libcontainer container a509e0078c7c7faeefce12d4b41736a6b32bfbec0626823b80f9c49a3c33ceea. Jan 29 11:25:38.886281 systemd[1]: Started cri-containerd-ab58607e25266e9bd41819d478e600232c71002adaf632b871b071a6d46df769.scope - libcontainer container ab58607e25266e9bd41819d478e600232c71002adaf632b871b071a6d46df769. 
Jan 29 11:25:38.920893 containerd[1452]: time="2025-01-29T11:25:38.920623021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n7gxb,Uid:87cff40c-e000-45d5-af33-7047e5588794,Namespace:calico-system,Attempt:0,} returns sandbox id \"a509e0078c7c7faeefce12d4b41736a6b32bfbec0626823b80f9c49a3c33ceea\"" Jan 29 11:25:38.923468 containerd[1452]: time="2025-01-29T11:25:38.923442499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:25:38.929996 containerd[1452]: time="2025-01-29T11:25:38.929856438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gjgsf,Uid:468190a3-6194-48e7-bd47-286ac429550c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab58607e25266e9bd41819d478e600232c71002adaf632b871b071a6d46df769\"" Jan 29 11:25:39.521742 kubelet[1840]: E0129 11:25:39.521660 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:39.656289 kubelet[1840]: E0129 11:25:39.656134 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:25:40.484256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount399798251.mount: Deactivated successfully. 
Jan 29 11:25:40.522594 kubelet[1840]: E0129 11:25:40.522498 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:40.673126 containerd[1452]: time="2025-01-29T11:25:40.672104421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 11:25:40.673126 containerd[1452]: time="2025-01-29T11:25:40.672439643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:25:40.676043 containerd[1452]: time="2025-01-29T11:25:40.675419071Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:25:40.677442 containerd[1452]: time="2025-01-29T11:25:40.677419176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:25:40.678345 containerd[1452]: time="2025-01-29T11:25:40.678310374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.754759613s" Jan 29 11:25:40.678418 containerd[1452]: time="2025-01-29T11:25:40.678343286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 11:25:40.680029 containerd[1452]: time="2025-01-29T11:25:40.679961662Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:25:40.680969 containerd[1452]: time="2025-01-29T11:25:40.680806433Z" level=info msg="CreateContainer within sandbox \"a509e0078c7c7faeefce12d4b41736a6b32bfbec0626823b80f9c49a3c33ceea\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:25:40.703211 containerd[1452]: time="2025-01-29T11:25:40.703175021Z" level=info msg="CreateContainer within sandbox \"a509e0078c7c7faeefce12d4b41736a6b32bfbec0626823b80f9c49a3c33ceea\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"772f9f6f813d58c0b086578632409716fec1759b91a89bf29e3a5b1ae79131f2\"" Jan 29 11:25:40.704507 containerd[1452]: time="2025-01-29T11:25:40.704003000Z" level=info msg="StartContainer for \"772f9f6f813d58c0b086578632409716fec1759b91a89bf29e3a5b1ae79131f2\"" Jan 29 11:25:40.740526 systemd[1]: Started cri-containerd-772f9f6f813d58c0b086578632409716fec1759b91a89bf29e3a5b1ae79131f2.scope - libcontainer container 772f9f6f813d58c0b086578632409716fec1759b91a89bf29e3a5b1ae79131f2. Jan 29 11:25:40.783510 containerd[1452]: time="2025-01-29T11:25:40.783454495Z" level=info msg="StartContainer for \"772f9f6f813d58c0b086578632409716fec1759b91a89bf29e3a5b1ae79131f2\" returns successfully" Jan 29 11:25:40.788923 systemd[1]: cri-containerd-772f9f6f813d58c0b086578632409716fec1759b91a89bf29e3a5b1ae79131f2.scope: Deactivated successfully. 
Jan 29 11:25:41.005856 containerd[1452]: time="2025-01-29T11:25:41.005318536Z" level=info msg="shim disconnected" id=772f9f6f813d58c0b086578632409716fec1759b91a89bf29e3a5b1ae79131f2 namespace=k8s.io Jan 29 11:25:41.005856 containerd[1452]: time="2025-01-29T11:25:41.005445886Z" level=warning msg="cleaning up after shim disconnected" id=772f9f6f813d58c0b086578632409716fec1759b91a89bf29e3a5b1ae79131f2 namespace=k8s.io Jan 29 11:25:41.005856 containerd[1452]: time="2025-01-29T11:25:41.005469931Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:25:41.433496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-772f9f6f813d58c0b086578632409716fec1759b91a89bf29e3a5b1ae79131f2-rootfs.mount: Deactivated successfully. Jan 29 11:25:41.523540 kubelet[1840]: E0129 11:25:41.523353 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:41.656413 kubelet[1840]: E0129 11:25:41.655585 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:25:42.109592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3434079628.mount: Deactivated successfully. 
Jan 29 11:25:42.524336 kubelet[1840]: E0129 11:25:42.524230 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:42.658125 containerd[1452]: time="2025-01-29T11:25:42.658080355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:25:42.663381 containerd[1452]: time="2025-01-29T11:25:42.663232650Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231136" Jan 29 11:25:42.664895 containerd[1452]: time="2025-01-29T11:25:42.664806009Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:25:42.668032 containerd[1452]: time="2025-01-29T11:25:42.667979361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:25:42.668665 containerd[1452]: time="2025-01-29T11:25:42.668624625Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.988627336s" Jan 29 11:25:42.668724 containerd[1452]: time="2025-01-29T11:25:42.668664720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 11:25:42.670557 containerd[1452]: time="2025-01-29T11:25:42.670343369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:25:42.670838 containerd[1452]: 
time="2025-01-29T11:25:42.670801771Z" level=info msg="CreateContainer within sandbox \"ab58607e25266e9bd41819d478e600232c71002adaf632b871b071a6d46df769\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:25:42.694969 containerd[1452]: time="2025-01-29T11:25:42.694922858Z" level=info msg="CreateContainer within sandbox \"ab58607e25266e9bd41819d478e600232c71002adaf632b871b071a6d46df769\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7ca7a760618acd28abdda234506e2d599915b28f6cf2094d5c0857f90a85d7ae\"" Jan 29 11:25:42.695371 containerd[1452]: time="2025-01-29T11:25:42.695346295Z" level=info msg="StartContainer for \"7ca7a760618acd28abdda234506e2d599915b28f6cf2094d5c0857f90a85d7ae\"" Jan 29 11:25:42.725539 systemd[1]: Started cri-containerd-7ca7a760618acd28abdda234506e2d599915b28f6cf2094d5c0857f90a85d7ae.scope - libcontainer container 7ca7a760618acd28abdda234506e2d599915b28f6cf2094d5c0857f90a85d7ae. Jan 29 11:25:42.755599 containerd[1452]: time="2025-01-29T11:25:42.755480786Z" level=info msg="StartContainer for \"7ca7a760618acd28abdda234506e2d599915b28f6cf2094d5c0857f90a85d7ae\" returns successfully" Jan 29 11:25:43.525072 kubelet[1840]: E0129 11:25:43.524921 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:43.656352 kubelet[1840]: E0129 11:25:43.656201 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:25:44.525783 kubelet[1840]: E0129 11:25:44.525602 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:45.527015 kubelet[1840]: E0129 11:25:45.526554 1840 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:45.656926 kubelet[1840]: E0129 11:25:45.656201 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:25:46.527609 kubelet[1840]: E0129 11:25:46.527444 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:47.529371 kubelet[1840]: E0129 11:25:47.529280 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:47.655640 kubelet[1840]: E0129 11:25:47.655603 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:25:48.320354 containerd[1452]: time="2025-01-29T11:25:48.320317101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:25:48.321537 containerd[1452]: time="2025-01-29T11:25:48.321478524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 11:25:48.322189 containerd[1452]: time="2025-01-29T11:25:48.322140338Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:25:48.325487 containerd[1452]: time="2025-01-29T11:25:48.325443146Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:25:48.326576 containerd[1452]: time="2025-01-29T11:25:48.326382391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.656003146s" Jan 29 11:25:48.326576 containerd[1452]: time="2025-01-29T11:25:48.326457994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 11:25:48.328955 containerd[1452]: time="2025-01-29T11:25:48.328932034Z" level=info msg="CreateContainer within sandbox \"a509e0078c7c7faeefce12d4b41736a6b32bfbec0626823b80f9c49a3c33ceea\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:25:48.345142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693882490.mount: Deactivated successfully. Jan 29 11:25:48.349650 containerd[1452]: time="2025-01-29T11:25:48.349620511Z" level=info msg="CreateContainer within sandbox \"a509e0078c7c7faeefce12d4b41736a6b32bfbec0626823b80f9c49a3c33ceea\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"09a32d28b1085ef645a03fb8ab24276b8503a52e29fd0998734e40a65425250b\"" Jan 29 11:25:48.350147 containerd[1452]: time="2025-01-29T11:25:48.350114921Z" level=info msg="StartContainer for \"09a32d28b1085ef645a03fb8ab24276b8503a52e29fd0998734e40a65425250b\"" Jan 29 11:25:48.383552 systemd[1]: Started cri-containerd-09a32d28b1085ef645a03fb8ab24276b8503a52e29fd0998734e40a65425250b.scope - libcontainer container 09a32d28b1085ef645a03fb8ab24276b8503a52e29fd0998734e40a65425250b. 
Jan 29 11:25:48.413144 containerd[1452]: time="2025-01-29T11:25:48.412994042Z" level=info msg="StartContainer for \"09a32d28b1085ef645a03fb8ab24276b8503a52e29fd0998734e40a65425250b\" returns successfully" Jan 29 11:25:48.529974 kubelet[1840]: E0129 11:25:48.529889 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:48.764983 kubelet[1840]: I0129 11:25:48.764868 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gjgsf" podStartSLOduration=9.027161322 podStartE2EDuration="12.764835665s" podCreationTimestamp="2025-01-29 11:25:36 +0000 UTC" firstStartedPulling="2025-01-29 11:25:38.932131505 +0000 UTC m=+2.837649382" lastFinishedPulling="2025-01-29 11:25:42.669805838 +0000 UTC m=+6.575323725" observedRunningTime="2025-01-29 11:25:43.705901815 +0000 UTC m=+7.611419742" watchObservedRunningTime="2025-01-29 11:25:48.764835665 +0000 UTC m=+12.670353592" Jan 29 11:25:49.530447 kubelet[1840]: E0129 11:25:49.530250 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:49.532935 containerd[1452]: time="2025-01-29T11:25:49.532847368Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:25:49.538932 systemd[1]: cri-containerd-09a32d28b1085ef645a03fb8ab24276b8503a52e29fd0998734e40a65425250b.scope: Deactivated successfully. Jan 29 11:25:49.585111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09a32d28b1085ef645a03fb8ab24276b8503a52e29fd0998734e40a65425250b-rootfs.mount: Deactivated successfully. 
Jan 29 11:25:49.585542 kubelet[1840]: I0129 11:25:49.585168 1840 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:25:49.670590 systemd[1]: Created slice kubepods-besteffort-pod618d298b_3aee_418b_8f1a_093ea40b4ebb.slice - libcontainer container kubepods-besteffort-pod618d298b_3aee_418b_8f1a_093ea40b4ebb.slice. Jan 29 11:25:49.678071 containerd[1452]: time="2025-01-29T11:25:49.677981624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:0,}" Jan 29 11:25:50.531452 kubelet[1840]: E0129 11:25:50.531324 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:50.829270 containerd[1452]: time="2025-01-29T11:25:50.829022864Z" level=info msg="shim disconnected" id=09a32d28b1085ef645a03fb8ab24276b8503a52e29fd0998734e40a65425250b namespace=k8s.io Jan 29 11:25:50.833105 containerd[1452]: time="2025-01-29T11:25:50.830186700Z" level=warning msg="cleaning up after shim disconnected" id=09a32d28b1085ef645a03fb8ab24276b8503a52e29fd0998734e40a65425250b namespace=k8s.io Jan 29 11:25:50.833105 containerd[1452]: time="2025-01-29T11:25:50.830298802Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:25:50.959671 containerd[1452]: time="2025-01-29T11:25:50.959601362Z" level=error msg="Failed to destroy network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:50.960365 containerd[1452]: time="2025-01-29T11:25:50.960312328Z" level=error msg="encountered an error cleaning up failed sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:50.960631 containerd[1452]: time="2025-01-29T11:25:50.960580733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:50.961102 kubelet[1840]: E0129 11:25:50.961046 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:50.961355 kubelet[1840]: E0129 11:25:50.961318 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:50.961567 kubelet[1840]: E0129 11:25:50.961529 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:50.961788 kubelet[1840]: E0129 11:25:50.961735 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:25:50.962498 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484-shm.mount: Deactivated successfully. Jan 29 11:25:51.531947 kubelet[1840]: E0129 11:25:51.531871 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:51.687529 update_engine[1440]: I20250129 11:25:51.687350 1440 update_attempter.cc:509] Updating boot flags... 
Jan 29 11:25:51.715156 containerd[1452]: time="2025-01-29T11:25:51.714642681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:25:51.718630 kubelet[1840]: I0129 11:25:51.718571 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484" Jan 29 11:25:51.726463 containerd[1452]: time="2025-01-29T11:25:51.725556998Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\"" Jan 29 11:25:51.726463 containerd[1452]: time="2025-01-29T11:25:51.725999430Z" level=info msg="Ensure that sandbox 6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484 in task-service has been cleanup successfully" Jan 29 11:25:51.731887 systemd[1]: run-netns-cni\x2dc5633fed\x2ddf28\x2d28ce\x2dc8bb\x2da58aabcc4a66.mount: Deactivated successfully. Jan 29 11:25:51.733700 containerd[1452]: time="2025-01-29T11:25:51.732462373Z" level=info msg="TearDown network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" successfully" Jan 29 11:25:51.733700 containerd[1452]: time="2025-01-29T11:25:51.732512247Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" returns successfully" Jan 29 11:25:51.733841 containerd[1452]: time="2025-01-29T11:25:51.733692164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:1,}" Jan 29 11:25:51.783679 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2329) Jan 29 11:25:51.852418 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2330) Jan 29 11:25:51.913508 containerd[1452]: time="2025-01-29T11:25:51.913455436Z" level=error msg="Failed to destroy network for sandbox 
\"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:51.915454 containerd[1452]: time="2025-01-29T11:25:51.915212437Z" level=error msg="encountered an error cleaning up failed sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:51.915454 containerd[1452]: time="2025-01-29T11:25:51.915287970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:51.915547 kubelet[1840]: E0129 11:25:51.915499 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:51.915595 kubelet[1840]: E0129 11:25:51.915553 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:51.915595 kubelet[1840]: E0129 11:25:51.915576 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:51.915650 kubelet[1840]: E0129 11:25:51.915616 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:25:51.916228 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734-shm.mount: Deactivated successfully. 
Jan 29 11:25:51.922515 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2330) Jan 29 11:25:52.532787 kubelet[1840]: E0129 11:25:52.532727 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:52.723026 kubelet[1840]: I0129 11:25:52.722871 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734" Jan 29 11:25:52.723939 containerd[1452]: time="2025-01-29T11:25:52.723787102Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\"" Jan 29 11:25:52.725569 containerd[1452]: time="2025-01-29T11:25:52.725503526Z" level=info msg="Ensure that sandbox 4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734 in task-service has been cleanup successfully" Jan 29 11:25:52.728462 containerd[1452]: time="2025-01-29T11:25:52.725946227Z" level=info msg="TearDown network for sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" successfully" Jan 29 11:25:52.728462 containerd[1452]: time="2025-01-29T11:25:52.726032840Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" returns successfully" Jan 29 11:25:52.731488 systemd[1]: run-netns-cni\x2dff0da59f\x2d05a6\x2d9995\x2d4bf9\x2db7cfee4ecff7.mount: Deactivated successfully. 
Jan 29 11:25:52.735126 containerd[1452]: time="2025-01-29T11:25:52.733758886Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\"" Jan 29 11:25:52.735126 containerd[1452]: time="2025-01-29T11:25:52.733956346Z" level=info msg="TearDown network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" successfully" Jan 29 11:25:52.735126 containerd[1452]: time="2025-01-29T11:25:52.733986843Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" returns successfully" Jan 29 11:25:52.737424 containerd[1452]: time="2025-01-29T11:25:52.737328972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:2,}" Jan 29 11:25:52.885505 containerd[1452]: time="2025-01-29T11:25:52.885342177Z" level=error msg="Failed to destroy network for sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:52.887114 containerd[1452]: time="2025-01-29T11:25:52.887031611Z" level=error msg="encountered an error cleaning up failed sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:52.887114 containerd[1452]: time="2025-01-29T11:25:52.887096322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:52.887688 kubelet[1840]: E0129 11:25:52.887290 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:52.887688 kubelet[1840]: E0129 11:25:52.887348 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:52.887688 kubelet[1840]: E0129 11:25:52.887371 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:52.887987 kubelet[1840]: E0129 11:25:52.887435 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:25:52.888680 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c-shm.mount: Deactivated successfully. Jan 29 11:25:53.533890 kubelet[1840]: E0129 11:25:53.533781 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:53.733433 kubelet[1840]: I0129 11:25:53.727630 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c" Jan 29 11:25:53.733586 containerd[1452]: time="2025-01-29T11:25:53.728843336Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\"" Jan 29 11:25:53.733586 containerd[1452]: time="2025-01-29T11:25:53.729219323Z" level=info msg="Ensure that sandbox 7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c in task-service has been cleanup successfully" Jan 29 11:25:53.734617 containerd[1452]: time="2025-01-29T11:25:53.734328841Z" level=info msg="TearDown network for sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" successfully" Jan 29 11:25:53.734617 containerd[1452]: time="2025-01-29T11:25:53.734373676Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" returns successfully" Jan 29 11:25:53.737242 systemd[1]: 
run-netns-cni\x2df31535bf\x2d7f7f\x2dbefd\x2d8448\x2d58a511c15dca.mount: Deactivated successfully. Jan 29 11:25:53.741690 containerd[1452]: time="2025-01-29T11:25:53.740965959Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\"" Jan 29 11:25:53.741690 containerd[1452]: time="2025-01-29T11:25:53.741141068Z" level=info msg="TearDown network for sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" successfully" Jan 29 11:25:53.741690 containerd[1452]: time="2025-01-29T11:25:53.741168259Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" returns successfully" Jan 29 11:25:53.742500 containerd[1452]: time="2025-01-29T11:25:53.742099879Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\"" Jan 29 11:25:53.742500 containerd[1452]: time="2025-01-29T11:25:53.742236636Z" level=info msg="TearDown network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" successfully" Jan 29 11:25:53.742500 containerd[1452]: time="2025-01-29T11:25:53.742326836Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" returns successfully" Jan 29 11:25:53.743484 containerd[1452]: time="2025-01-29T11:25:53.743382629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:3,}" Jan 29 11:25:53.855709 containerd[1452]: time="2025-01-29T11:25:53.855618285Z" level=error msg="Failed to destroy network for sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:53.856749 containerd[1452]: time="2025-01-29T11:25:53.856612372Z" 
level=error msg="encountered an error cleaning up failed sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:53.856749 containerd[1452]: time="2025-01-29T11:25:53.856666123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:53.857522 kubelet[1840]: E0129 11:25:53.856939 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:53.857522 kubelet[1840]: E0129 11:25:53.856992 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:53.857522 kubelet[1840]: E0129 11:25:53.857013 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:53.857631 kubelet[1840]: E0129 11:25:53.857081 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:25:53.857922 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237-shm.mount: Deactivated successfully. 
Jan 29 11:25:54.534496 kubelet[1840]: E0129 11:25:54.534432 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:54.733271 kubelet[1840]: I0129 11:25:54.733243 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237" Jan 29 11:25:54.735029 containerd[1452]: time="2025-01-29T11:25:54.734971233Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\"" Jan 29 11:25:54.736399 containerd[1452]: time="2025-01-29T11:25:54.735365423Z" level=info msg="Ensure that sandbox 86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237 in task-service has been cleanup successfully" Jan 29 11:25:54.739586 containerd[1452]: time="2025-01-29T11:25:54.739515599Z" level=info msg="TearDown network for sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" successfully" Jan 29 11:25:54.739586 containerd[1452]: time="2025-01-29T11:25:54.739571984Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" returns successfully" Jan 29 11:25:54.740179 containerd[1452]: time="2025-01-29T11:25:54.740130593Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\"" Jan 29 11:25:54.740411 containerd[1452]: time="2025-01-29T11:25:54.740358301Z" level=info msg="TearDown network for sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" successfully" Jan 29 11:25:54.740469 containerd[1452]: time="2025-01-29T11:25:54.740446196Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" returns successfully" Jan 29 11:25:54.742141 systemd[1]: run-netns-cni\x2d0d17a0a7\x2d7eb9\x2d4dae\x2d09b2\x2d9b10593d0b6a.mount: Deactivated successfully. 
Jan 29 11:25:54.744422 containerd[1452]: time="2025-01-29T11:25:54.744332946Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\"" Jan 29 11:25:54.744585 containerd[1452]: time="2025-01-29T11:25:54.744547259Z" level=info msg="TearDown network for sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" successfully" Jan 29 11:25:54.744623 containerd[1452]: time="2025-01-29T11:25:54.744585601Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" returns successfully" Jan 29 11:25:54.748114 containerd[1452]: time="2025-01-29T11:25:54.748066941Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\"" Jan 29 11:25:54.748275 containerd[1452]: time="2025-01-29T11:25:54.748238973Z" level=info msg="TearDown network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" successfully" Jan 29 11:25:54.748312 containerd[1452]: time="2025-01-29T11:25:54.748276243Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" returns successfully" Jan 29 11:25:54.749349 containerd[1452]: time="2025-01-29T11:25:54.749297762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:4,}" Jan 29 11:25:54.884849 containerd[1452]: time="2025-01-29T11:25:54.884735569Z" level=error msg="Failed to destroy network for sandbox \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:54.885422 containerd[1452]: time="2025-01-29T11:25:54.885219648Z" level=error msg="encountered an error cleaning up failed sandbox 
\"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:54.885422 containerd[1452]: time="2025-01-29T11:25:54.885281904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:54.885950 kubelet[1840]: E0129 11:25:54.885619 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:54.885950 kubelet[1840]: E0129 11:25:54.885678 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:54.885950 kubelet[1840]: E0129 11:25:54.885702 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:54.886102 kubelet[1840]: E0129 11:25:54.885745 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:25:54.887615 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e-shm.mount: Deactivated successfully. Jan 29 11:25:55.234855 systemd[1]: Created slice kubepods-besteffort-podea5c4cb6_6ccb_4d9d_8ce1_844dcadd8be8.slice - libcontainer container kubepods-besteffort-podea5c4cb6_6ccb_4d9d_8ce1_844dcadd8be8.slice. 
Jan 29 11:25:55.386311 kubelet[1840]: I0129 11:25:55.386075 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2vr2\" (UniqueName: \"kubernetes.io/projected/ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8-kube-api-access-c2vr2\") pod \"nginx-deployment-8587fbcb89-4m8f7\" (UID: \"ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8\") " pod="default/nginx-deployment-8587fbcb89-4m8f7" Jan 29 11:25:55.534790 kubelet[1840]: E0129 11:25:55.534614 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:55.545043 containerd[1452]: time="2025-01-29T11:25:55.544338971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4m8f7,Uid:ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8,Namespace:default,Attempt:0,}" Jan 29 11:25:55.664244 containerd[1452]: time="2025-01-29T11:25:55.664203545Z" level=error msg="Failed to destroy network for sandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:55.665236 containerd[1452]: time="2025-01-29T11:25:55.664635195Z" level=error msg="encountered an error cleaning up failed sandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:55.665236 containerd[1452]: time="2025-01-29T11:25:55.664690590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4m8f7,Uid:ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:55.665362 kubelet[1840]: E0129 11:25:55.664867 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:55.665362 kubelet[1840]: E0129 11:25:55.664929 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-4m8f7" Jan 29 11:25:55.665362 kubelet[1840]: E0129 11:25:55.664952 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-4m8f7" Jan 29 11:25:55.665541 kubelet[1840]: E0129 11:25:55.664994 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-4m8f7_default(ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-8587fbcb89-4m8f7_default(ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-4m8f7" podUID="ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8" Jan 29 11:25:55.748072 kubelet[1840]: I0129 11:25:55.747980 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e" Jan 29 11:25:55.750082 containerd[1452]: time="2025-01-29T11:25:55.748956844Z" level=info msg="StopPodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\"" Jan 29 11:25:55.750082 containerd[1452]: time="2025-01-29T11:25:55.749269802Z" level=info msg="Ensure that sandbox 0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e in task-service has been cleanup successfully" Jan 29 11:25:55.751797 containerd[1452]: time="2025-01-29T11:25:55.751558100Z" level=info msg="TearDown network for sandbox \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" successfully" Jan 29 11:25:55.751797 containerd[1452]: time="2025-01-29T11:25:55.751597834Z" level=info msg="StopPodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" returns successfully" Jan 29 11:25:55.757462 containerd[1452]: time="2025-01-29T11:25:55.755886660Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\"" Jan 29 11:25:55.757462 containerd[1452]: time="2025-01-29T11:25:55.756024629Z" level=info msg="TearDown network for sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" successfully" Jan 29 11:25:55.757462 containerd[1452]: 
time="2025-01-29T11:25:55.756045388Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" returns successfully" Jan 29 11:25:55.758376 systemd[1]: run-netns-cni\x2d259bfd7d\x2d4a84\x2db41a\x2df14b\x2d9fbacc48ff1c.mount: Deactivated successfully. Jan 29 11:25:55.762331 containerd[1452]: time="2025-01-29T11:25:55.760153614Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\"" Jan 29 11:25:55.762331 containerd[1452]: time="2025-01-29T11:25:55.760292424Z" level=info msg="TearDown network for sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" successfully" Jan 29 11:25:55.762331 containerd[1452]: time="2025-01-29T11:25:55.760313113Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" returns successfully" Jan 29 11:25:55.762586 kubelet[1840]: I0129 11:25:55.760832 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca" Jan 29 11:25:55.763677 containerd[1452]: time="2025-01-29T11:25:55.763638408Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\"" Jan 29 11:25:55.764857 containerd[1452]: time="2025-01-29T11:25:55.763978097Z" level=info msg="TearDown network for sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" successfully" Jan 29 11:25:55.764857 containerd[1452]: time="2025-01-29T11:25:55.764007622Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" returns successfully" Jan 29 11:25:55.764857 containerd[1452]: time="2025-01-29T11:25:55.764207327Z" level=info msg="StopPodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\"" Jan 29 11:25:55.765510 containerd[1452]: time="2025-01-29T11:25:55.765471131Z" level=info msg="Ensure that 
sandbox d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca in task-service has been cleanup successfully" Jan 29 11:25:55.770198 containerd[1452]: time="2025-01-29T11:25:55.768577755Z" level=info msg="TearDown network for sandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" successfully" Jan 29 11:25:55.772069 containerd[1452]: time="2025-01-29T11:25:55.771477381Z" level=info msg="StopPodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" returns successfully" Jan 29 11:25:55.772069 containerd[1452]: time="2025-01-29T11:25:55.771785420Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\"" Jan 29 11:25:55.772069 containerd[1452]: time="2025-01-29T11:25:55.771940230Z" level=info msg="TearDown network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" successfully" Jan 29 11:25:55.772069 containerd[1452]: time="2025-01-29T11:25:55.771961260Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" returns successfully" Jan 29 11:25:55.772339 systemd[1]: run-netns-cni\x2d42e68517\x2df574\x2ddf33\x2dc4ce\x2d6049fe2df2ec.mount: Deactivated successfully. 
Jan 29 11:25:55.776526 containerd[1452]: time="2025-01-29T11:25:55.775665998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:5,}" Jan 29 11:25:55.786985 containerd[1452]: time="2025-01-29T11:25:55.786816760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4m8f7,Uid:ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8,Namespace:default,Attempt:1,}" Jan 29 11:25:55.897073 containerd[1452]: time="2025-01-29T11:25:55.897022333Z" level=error msg="Failed to destroy network for sandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:55.897423 containerd[1452]: time="2025-01-29T11:25:55.897372531Z" level=error msg="encountered an error cleaning up failed sandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:55.897478 containerd[1452]: time="2025-01-29T11:25:55.897448293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:55.898632 kubelet[1840]: E0129 11:25:55.898174 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:55.898632 kubelet[1840]: E0129 11:25:55.898255 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:55.898632 kubelet[1840]: E0129 11:25:55.898321 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:55.898771 kubelet[1840]: E0129 11:25:55.898371 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:25:55.914513 containerd[1452]: time="2025-01-29T11:25:55.914462949Z" level=error msg="Failed to destroy network for sandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:55.914780 containerd[1452]: time="2025-01-29T11:25:55.914749497Z" level=error msg="encountered an error cleaning up failed sandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:55.914831 containerd[1452]: time="2025-01-29T11:25:55.914805783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4m8f7,Uid:ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:55.915441 kubelet[1840]: E0129 11:25:55.915085 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:55.915441 kubelet[1840]: E0129 11:25:55.915154 1840 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-4m8f7" Jan 29 11:25:55.915441 kubelet[1840]: E0129 11:25:55.915176 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-4m8f7" Jan 29 11:25:55.915580 kubelet[1840]: E0129 11:25:55.915216 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-4m8f7_default(ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-4m8f7_default(ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-4m8f7" podUID="ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8" Jan 29 11:25:56.517588 kubelet[1840]: E0129 11:25:56.517476 1840 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:56.535006 kubelet[1840]: E0129 11:25:56.534942 1840 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:56.740355 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b-shm.mount: Deactivated successfully. Jan 29 11:25:56.740470 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4-shm.mount: Deactivated successfully. Jan 29 11:25:56.773354 kubelet[1840]: I0129 11:25:56.772223 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4" Jan 29 11:25:56.773469 containerd[1452]: time="2025-01-29T11:25:56.772898349Z" level=info msg="StopPodSandbox for \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\"" Jan 29 11:25:56.773469 containerd[1452]: time="2025-01-29T11:25:56.773112101Z" level=info msg="Ensure that sandbox 5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4 in task-service has been cleanup successfully" Jan 29 11:25:56.775231 systemd[1]: run-netns-cni\x2d14960656\x2d14a0\x2d2d5f\x2d2193\x2d9f11d7ba670e.mount: Deactivated successfully. 
Jan 29 11:25:56.776522 containerd[1452]: time="2025-01-29T11:25:56.776486628Z" level=info msg="TearDown network for sandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\" successfully" Jan 29 11:25:56.776522 containerd[1452]: time="2025-01-29T11:25:56.776512507Z" level=info msg="StopPodSandbox for \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\" returns successfully" Jan 29 11:25:56.778661 containerd[1452]: time="2025-01-29T11:25:56.778625054Z" level=info msg="StopPodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\"" Jan 29 11:25:56.778754 containerd[1452]: time="2025-01-29T11:25:56.778708261Z" level=info msg="TearDown network for sandbox \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" successfully" Jan 29 11:25:56.778809 containerd[1452]: time="2025-01-29T11:25:56.778752925Z" level=info msg="StopPodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" returns successfully" Jan 29 11:25:56.779709 containerd[1452]: time="2025-01-29T11:25:56.779648326Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\"" Jan 29 11:25:56.779760 containerd[1452]: time="2025-01-29T11:25:56.779715482Z" level=info msg="TearDown network for sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" successfully" Jan 29 11:25:56.779760 containerd[1452]: time="2025-01-29T11:25:56.779727454Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" returns successfully" Jan 29 11:25:56.780657 containerd[1452]: time="2025-01-29T11:25:56.780602508Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\"" Jan 29 11:25:56.780711 containerd[1452]: time="2025-01-29T11:25:56.780671137Z" level=info msg="TearDown network for sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" successfully" Jan 
29 11:25:56.780711 containerd[1452]: time="2025-01-29T11:25:56.780683430Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" returns successfully" Jan 29 11:25:56.781644 kubelet[1840]: I0129 11:25:56.781073 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b" Jan 29 11:25:56.782037 containerd[1452]: time="2025-01-29T11:25:56.782010332Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\"" Jan 29 11:25:56.782131 containerd[1452]: time="2025-01-29T11:25:56.782108165Z" level=info msg="TearDown network for sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" successfully" Jan 29 11:25:56.782166 containerd[1452]: time="2025-01-29T11:25:56.782130778Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" returns successfully" Jan 29 11:25:56.782220 containerd[1452]: time="2025-01-29T11:25:56.782196842Z" level=info msg="StopPodSandbox for \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\"" Jan 29 11:25:56.783688 containerd[1452]: time="2025-01-29T11:25:56.783512674Z" level=info msg="Ensure that sandbox a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b in task-service has been cleanup successfully" Jan 29 11:25:56.784017 containerd[1452]: time="2025-01-29T11:25:56.783781618Z" level=info msg="TearDown network for sandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\" successfully" Jan 29 11:25:56.785753 systemd[1]: run-netns-cni\x2de11ff497\x2dbfd1\x2d40f9\x2d2f40\x2dd3c14b604220.mount: Deactivated successfully. 
Jan 29 11:25:56.786008 containerd[1452]: time="2025-01-29T11:25:56.785753571Z" level=info msg="StopPodSandbox for \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\" returns successfully" Jan 29 11:25:56.787593 containerd[1452]: time="2025-01-29T11:25:56.786982810Z" level=info msg="StopPodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\"" Jan 29 11:25:56.787593 containerd[1452]: time="2025-01-29T11:25:56.787065345Z" level=info msg="TearDown network for sandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" successfully" Jan 29 11:25:56.787593 containerd[1452]: time="2025-01-29T11:25:56.787077207Z" level=info msg="StopPodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" returns successfully" Jan 29 11:25:56.787593 containerd[1452]: time="2025-01-29T11:25:56.787123404Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\"" Jan 29 11:25:56.787593 containerd[1452]: time="2025-01-29T11:25:56.787183487Z" level=info msg="TearDown network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" successfully" Jan 29 11:25:56.787593 containerd[1452]: time="2025-01-29T11:25:56.787194458Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" returns successfully" Jan 29 11:25:56.788504 containerd[1452]: time="2025-01-29T11:25:56.787986776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:6,}" Jan 29 11:25:56.788504 containerd[1452]: time="2025-01-29T11:25:56.788213311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4m8f7,Uid:ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8,Namespace:default,Attempt:2,}" Jan 29 11:25:56.908977 containerd[1452]: time="2025-01-29T11:25:56.908856387Z" level=error msg="Failed to destroy 
network for sandbox \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:56.909719 containerd[1452]: time="2025-01-29T11:25:56.909693399Z" level=error msg="encountered an error cleaning up failed sandbox \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:56.910139 containerd[1452]: time="2025-01-29T11:25:56.910114329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:56.910511 kubelet[1840]: E0129 11:25:56.910473 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:56.910582 kubelet[1840]: E0129 11:25:56.910533 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:56.910582 kubelet[1840]: E0129 11:25:56.910555 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:56.910644 kubelet[1840]: E0129 11:25:56.910599 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:25:56.929359 containerd[1452]: time="2025-01-29T11:25:56.929209339Z" level=error msg="Failed to destroy network for sandbox \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:56.929749 containerd[1452]: time="2025-01-29T11:25:56.929714217Z" level=error msg="encountered an error cleaning 
up failed sandbox \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:56.930375 containerd[1452]: time="2025-01-29T11:25:56.929842368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4m8f7,Uid:ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:56.930511 kubelet[1840]: E0129 11:25:56.930015 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:56.930511 kubelet[1840]: E0129 11:25:56.930069 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-4m8f7" Jan 29 11:25:56.930511 kubelet[1840]: E0129 11:25:56.930090 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-4m8f7" Jan 29 11:25:56.930626 kubelet[1840]: E0129 11:25:56.930139 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-4m8f7_default(ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-4m8f7_default(ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-4m8f7" podUID="ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8" Jan 29 11:25:57.536028 kubelet[1840]: E0129 11:25:57.535948 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:57.738019 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e-shm.mount: Deactivated successfully. Jan 29 11:25:57.738127 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46-shm.mount: Deactivated successfully. 
Jan 29 11:25:57.785164 kubelet[1840]: I0129 11:25:57.784996 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e" Jan 29 11:25:57.785695 containerd[1452]: time="2025-01-29T11:25:57.785655418Z" level=info msg="StopPodSandbox for \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\"" Jan 29 11:25:57.786787 containerd[1452]: time="2025-01-29T11:25:57.786537305Z" level=info msg="Ensure that sandbox c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e in task-service has been cleanup successfully" Jan 29 11:25:57.788656 containerd[1452]: time="2025-01-29T11:25:57.788461638Z" level=info msg="TearDown network for sandbox \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\" successfully" Jan 29 11:25:57.788656 containerd[1452]: time="2025-01-29T11:25:57.788482036Z" level=info msg="StopPodSandbox for \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\" returns successfully" Jan 29 11:25:57.788540 systemd[1]: run-netns-cni\x2d5822eba6\x2dcd05\x2dd216\x2dadd2\x2db31d0e5fdf12.mount: Deactivated successfully. 
Jan 29 11:25:57.790417 containerd[1452]: time="2025-01-29T11:25:57.790314096Z" level=info msg="StopPodSandbox for \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\"" Jan 29 11:25:57.791079 containerd[1452]: time="2025-01-29T11:25:57.790939451Z" level=info msg="TearDown network for sandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\" successfully" Jan 29 11:25:57.791079 containerd[1452]: time="2025-01-29T11:25:57.791069254Z" level=info msg="StopPodSandbox for \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\" returns successfully" Jan 29 11:25:57.793646 containerd[1452]: time="2025-01-29T11:25:57.793606088Z" level=info msg="StopPodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\"" Jan 29 11:25:57.793713 containerd[1452]: time="2025-01-29T11:25:57.793694293Z" level=info msg="TearDown network for sandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" successfully" Jan 29 11:25:57.793750 containerd[1452]: time="2025-01-29T11:25:57.793712538Z" level=info msg="StopPodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" returns successfully" Jan 29 11:25:57.794501 containerd[1452]: time="2025-01-29T11:25:57.794315440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4m8f7,Uid:ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8,Namespace:default,Attempt:3,}" Jan 29 11:25:57.810419 kubelet[1840]: I0129 11:25:57.809853 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46" Jan 29 11:25:57.817238 containerd[1452]: time="2025-01-29T11:25:57.814922467Z" level=info msg="StopPodSandbox for \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\"" Jan 29 11:25:57.817844 containerd[1452]: time="2025-01-29T11:25:57.817727183Z" level=info msg="Ensure that sandbox 
96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46 in task-service has been cleanup successfully" Jan 29 11:25:57.817951 containerd[1452]: time="2025-01-29T11:25:57.817933822Z" level=info msg="TearDown network for sandbox \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\" successfully" Jan 29 11:25:57.818637 containerd[1452]: time="2025-01-29T11:25:57.817997020Z" level=info msg="StopPodSandbox for \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\" returns successfully" Jan 29 11:25:57.819026 containerd[1452]: time="2025-01-29T11:25:57.818769230Z" level=info msg="StopPodSandbox for \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\"" Jan 29 11:25:57.820806 containerd[1452]: time="2025-01-29T11:25:57.819509640Z" level=info msg="TearDown network for sandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\" successfully" Jan 29 11:25:57.820806 containerd[1452]: time="2025-01-29T11:25:57.819527493Z" level=info msg="StopPodSandbox for \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\" returns successfully" Jan 29 11:25:57.822048 systemd[1]: run-netns-cni\x2d14c22f93\x2db977\x2d74f9\x2db444\x2def99153b719b.mount: Deactivated successfully. 
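[Editor's note] The entries above trace kubelet's stale-sandbox cleanup cycle: pod_container_deletor flags a container ID that no longer maps to a pod, containerd handles StopPodSandbox and TearDown network, and systemd unmounts the per-sandbox netns (the run-netns-cni-… mounts). A minimal sketch of replaying that cycle by hand with crictl, the standard CRI CLI (this is a hypothetical manual intervention, not something the log itself performs; `stopp`/`rmp` correspond to the StopPodSandbox/RemovePodSandbox RPCs, and the command-prefix parameter exists only so the loop can be exercised without a live runtime):

```shell
# Hedged sketch: manually replay the stale-sandbox cleanup seen in the log.
# "$cri" defaults to crictl; pass a stub to dry-run without a container runtime.
cleanup_stale_sandboxes() {
  local cri="${1:-crictl}"
  # List sandboxes that are no longer Ready (IDs only).
  for sb in $("$cri" pods --state NotReady -q); do
    "$cri" stopp "$sb"   # StopPodSandbox -> CNI DEL -> netns unmount
    "$cri" rmp "$sb"     # RemovePodSandbox -> removes sandbox metadata
  done
}
```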
Jan 29 11:25:57.825710 containerd[1452]: time="2025-01-29T11:25:57.825662735Z" level=info msg="StopPodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\"" Jan 29 11:25:57.825800 containerd[1452]: time="2025-01-29T11:25:57.825764927Z" level=info msg="TearDown network for sandbox \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" successfully" Jan 29 11:25:57.825800 containerd[1452]: time="2025-01-29T11:25:57.825777701Z" level=info msg="StopPodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" returns successfully" Jan 29 11:25:57.826973 containerd[1452]: time="2025-01-29T11:25:57.826944582Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\"" Jan 29 11:25:57.827073 containerd[1452]: time="2025-01-29T11:25:57.827050471Z" level=info msg="TearDown network for sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" successfully" Jan 29 11:25:57.827073 containerd[1452]: time="2025-01-29T11:25:57.827068154Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" returns successfully" Jan 29 11:25:57.827728 containerd[1452]: time="2025-01-29T11:25:57.827702906Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\"" Jan 29 11:25:57.827828 containerd[1452]: time="2025-01-29T11:25:57.827797574Z" level=info msg="TearDown network for sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" successfully" Jan 29 11:25:57.827828 containerd[1452]: time="2025-01-29T11:25:57.827814446Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" returns successfully" Jan 29 11:25:57.828189 containerd[1452]: time="2025-01-29T11:25:57.828162700Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\"" Jan 29 11:25:57.828255 
containerd[1452]: time="2025-01-29T11:25:57.828233953Z" level=info msg="TearDown network for sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" successfully" Jan 29 11:25:57.828295 containerd[1452]: time="2025-01-29T11:25:57.828251667Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" returns successfully" Jan 29 11:25:57.829609 containerd[1452]: time="2025-01-29T11:25:57.829559022Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\"" Jan 29 11:25:57.830086 containerd[1452]: time="2025-01-29T11:25:57.829750661Z" level=info msg="TearDown network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" successfully" Jan 29 11:25:57.830086 containerd[1452]: time="2025-01-29T11:25:57.829769236Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" returns successfully" Jan 29 11:25:57.830423 containerd[1452]: time="2025-01-29T11:25:57.830347603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:7,}" Jan 29 11:25:57.920107 containerd[1452]: time="2025-01-29T11:25:57.919974286Z" level=error msg="Failed to destroy network for sandbox \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:57.920552 containerd[1452]: time="2025-01-29T11:25:57.920419292Z" level=error msg="encountered an error cleaning up failed sandbox \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:57.920552 containerd[1452]: time="2025-01-29T11:25:57.920473203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4m8f7,Uid:ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:57.920723 kubelet[1840]: E0129 11:25:57.920684 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:57.920771 kubelet[1840]: E0129 11:25:57.920750 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-4m8f7" Jan 29 11:25:57.920801 kubelet[1840]: E0129 11:25:57.920786 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-4m8f7" Jan 29 11:25:57.921072 kubelet[1840]: E0129 11:25:57.920831 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-4m8f7_default(ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-4m8f7_default(ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-4m8f7" podUID="ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8" Jan 29 11:25:57.939091 containerd[1452]: time="2025-01-29T11:25:57.939047712Z" level=error msg="Failed to destroy network for sandbox \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:57.939496 containerd[1452]: time="2025-01-29T11:25:57.939471057Z" level=error msg="encountered an error cleaning up failed sandbox \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:57.939627 containerd[1452]: time="2025-01-29T11:25:57.939602984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for 
sandbox \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:57.939947 kubelet[1840]: E0129 11:25:57.939904 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:57.940015 kubelet[1840]: E0129 11:25:57.939970 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:57.940015 kubelet[1840]: E0129 11:25:57.939992 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:57.940072 kubelet[1840]: E0129 11:25:57.940034 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:25:58.536085 kubelet[1840]: E0129 11:25:58.536026 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:58.738225 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d-shm.mount: Deactivated successfully. Jan 29 11:25:58.817688 kubelet[1840]: I0129 11:25:58.817572 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc" Jan 29 11:25:58.818576 containerd[1452]: time="2025-01-29T11:25:58.818191083Z" level=info msg="StopPodSandbox for \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\"" Jan 29 11:25:58.819571 containerd[1452]: time="2025-01-29T11:25:58.819093337Z" level=info msg="Ensure that sandbox 446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc in task-service has been cleanup successfully" Jan 29 11:25:58.819571 containerd[1452]: time="2025-01-29T11:25:58.819346513Z" level=info msg="TearDown network for sandbox \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\" successfully" Jan 29 11:25:58.819571 containerd[1452]: time="2025-01-29T11:25:58.819361761Z" level=info msg="StopPodSandbox for \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\" returns successfully" Jan 29 11:25:58.822368 containerd[1452]: time="2025-01-29T11:25:58.821451115Z" level=info 
msg="StopPodSandbox for \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\"" Jan 29 11:25:58.822368 containerd[1452]: time="2025-01-29T11:25:58.821552685Z" level=info msg="TearDown network for sandbox \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\" successfully" Jan 29 11:25:58.822368 containerd[1452]: time="2025-01-29T11:25:58.821566732Z" level=info msg="StopPodSandbox for \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\" returns successfully" Jan 29 11:25:58.824280 containerd[1452]: time="2025-01-29T11:25:58.822886871Z" level=info msg="StopPodSandbox for \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\"" Jan 29 11:25:58.824280 containerd[1452]: time="2025-01-29T11:25:58.822962392Z" level=info msg="TearDown network for sandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\" successfully" Jan 29 11:25:58.824280 containerd[1452]: time="2025-01-29T11:25:58.822974395Z" level=info msg="StopPodSandbox for \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\" returns successfully" Jan 29 11:25:58.824280 containerd[1452]: time="2025-01-29T11:25:58.823816125Z" level=info msg="StopPodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\"" Jan 29 11:25:58.824280 containerd[1452]: time="2025-01-29T11:25:58.823885876Z" level=info msg="TearDown network for sandbox \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" successfully" Jan 29 11:25:58.824280 containerd[1452]: time="2025-01-29T11:25:58.823901505Z" level=info msg="StopPodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" returns successfully" Jan 29 11:25:58.823134 systemd[1]: run-netns-cni\x2d7615c20e\x2d7696\x2d02f6\x2dd06e\x2d507351af3d27.mount: Deactivated successfully. 
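[Editor's note] Every sandbox failure in this excerpt has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and the file does not exist on this node. A minimal sketch of that precondition check (the path root is parameterized purely so the check can be run off-host; the function name and output wording are illustrative, not the plugin's actual code):

```shell
# Hedged sketch of the precondition the CNI plugin keeps failing:
# stat /var/lib/calico/nodename, which calico/node writes on startup.
check_calico_nodename() {
  local root="${1:-}"                     # optional alternate root for testing
  local f="$root/var/lib/calico/nodename"
  if [ -f "$f" ]; then
    echo "nodename present: $(cat "$f")"
  else
    # Mirrors the failure mode in the log: file absent => CNI add/delete fails.
    echo "stat $f: no such file or directory" >&2
    return 1
  fi
}
```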
Jan 29 11:25:58.825888 kubelet[1840]: I0129 11:25:58.825634 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d" Jan 29 11:25:58.825977 containerd[1452]: time="2025-01-29T11:25:58.825548277Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\"" Jan 29 11:25:58.826073 containerd[1452]: time="2025-01-29T11:25:58.826010946Z" level=info msg="TearDown network for sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" successfully" Jan 29 11:25:58.826073 containerd[1452]: time="2025-01-29T11:25:58.826030223Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" returns successfully" Jan 29 11:25:58.826686 containerd[1452]: time="2025-01-29T11:25:58.826525453Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\"" Jan 29 11:25:58.826686 containerd[1452]: time="2025-01-29T11:25:58.826617826Z" level=info msg="TearDown network for sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" successfully" Jan 29 11:25:58.826686 containerd[1452]: time="2025-01-29T11:25:58.826632333Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" returns successfully" Jan 29 11:25:58.826686 containerd[1452]: time="2025-01-29T11:25:58.826645047Z" level=info msg="StopPodSandbox for \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\"" Jan 29 11:25:58.826893 containerd[1452]: time="2025-01-29T11:25:58.826860972Z" level=info msg="Ensure that sandbox 754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d in task-service has been cleanup successfully" Jan 29 11:25:58.827377 containerd[1452]: time="2025-01-29T11:25:58.827234173Z" level=info msg="StopPodSandbox for 
\"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\"" Jan 29 11:25:58.827377 containerd[1452]: time="2025-01-29T11:25:58.827314183Z" level=info msg="TearDown network for sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" successfully" Jan 29 11:25:58.827377 containerd[1452]: time="2025-01-29T11:25:58.827326236Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" returns successfully" Jan 29 11:25:58.827576 containerd[1452]: time="2025-01-29T11:25:58.827472421Z" level=info msg="TearDown network for sandbox \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\" successfully" Jan 29 11:25:58.827576 containerd[1452]: time="2025-01-29T11:25:58.827495073Z" level=info msg="StopPodSandbox for \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\" returns successfully" Jan 29 11:25:58.829643 containerd[1452]: time="2025-01-29T11:25:58.829368200Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\"" Jan 29 11:25:58.829643 containerd[1452]: time="2025-01-29T11:25:58.829464000Z" level=info msg="TearDown network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" successfully" Jan 29 11:25:58.829643 containerd[1452]: time="2025-01-29T11:25:58.829476964Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" returns successfully" Jan 29 11:25:58.829643 containerd[1452]: time="2025-01-29T11:25:58.829532308Z" level=info msg="StopPodSandbox for \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\"" Jan 29 11:25:58.829643 containerd[1452]: time="2025-01-29T11:25:58.829589014Z" level=info msg="TearDown network for sandbox \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\" successfully" Jan 29 11:25:58.829643 containerd[1452]: time="2025-01-29T11:25:58.829599875Z" level=info msg="StopPodSandbox for 
\"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\" returns successfully" Jan 29 11:25:58.830059 systemd[1]: run-netns-cni\x2d32eee72c\x2d830a\x2d98d8\x2dc315\x2d7cedd1acb44e.mount: Deactivated successfully. Jan 29 11:25:58.830437 containerd[1452]: time="2025-01-29T11:25:58.830072592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:8,}" Jan 29 11:25:58.833076 containerd[1452]: time="2025-01-29T11:25:58.832736595Z" level=info msg="StopPodSandbox for \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\"" Jan 29 11:25:58.833076 containerd[1452]: time="2025-01-29T11:25:58.832845159Z" level=info msg="TearDown network for sandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\" successfully" Jan 29 11:25:58.833076 containerd[1452]: time="2025-01-29T11:25:58.832863033Z" level=info msg="StopPodSandbox for \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\" returns successfully" Jan 29 11:25:58.834594 containerd[1452]: time="2025-01-29T11:25:58.834225951Z" level=info msg="StopPodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\"" Jan 29 11:25:58.834594 containerd[1452]: time="2025-01-29T11:25:58.834332001Z" level=info msg="TearDown network for sandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" successfully" Jan 29 11:25:58.834594 containerd[1452]: time="2025-01-29T11:25:58.834345506Z" level=info msg="StopPodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" returns successfully" Jan 29 11:25:58.835338 containerd[1452]: time="2025-01-29T11:25:58.835315868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4m8f7,Uid:ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8,Namespace:default,Attempt:4,}" Jan 29 11:25:58.951871 containerd[1452]: time="2025-01-29T11:25:58.951659424Z" level=error 
msg="Failed to destroy network for sandbox \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:58.953256 containerd[1452]: time="2025-01-29T11:25:58.953221007Z" level=error msg="encountered an error cleaning up failed sandbox \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:58.953323 containerd[1452]: time="2025-01-29T11:25:58.953288603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4m8f7,Uid:ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:58.953649 kubelet[1840]: E0129 11:25:58.953556 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:58.953649 kubelet[1840]: E0129 11:25:58.953618 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-4m8f7" Jan 29 11:25:58.953649 kubelet[1840]: E0129 11:25:58.953640 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-4m8f7" Jan 29 11:25:58.953839 kubelet[1840]: E0129 11:25:58.953691 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-4m8f7_default(ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-4m8f7_default(ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-4m8f7" podUID="ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8" Jan 29 11:25:58.973957 containerd[1452]: time="2025-01-29T11:25:58.973214628Z" level=error msg="Failed to destroy network for sandbox \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 
29 11:25:58.973957 containerd[1452]: time="2025-01-29T11:25:58.973657349Z" level=error msg="encountered an error cleaning up failed sandbox \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:58.973957 containerd[1452]: time="2025-01-29T11:25:58.973705579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:58.974120 kubelet[1840]: E0129 11:25:58.973860 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:25:58.974120 kubelet[1840]: E0129 11:25:58.973905 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:58.974120 kubelet[1840]: E0129 11:25:58.973936 1840 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:25:58.974211 kubelet[1840]: E0129 11:25:58.973975 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:25:59.537077 kubelet[1840]: E0129 11:25:59.537036 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:25:59.738962 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045-shm.mount: Deactivated successfully. Jan 29 11:25:59.739061 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c-shm.mount: Deactivated successfully. 
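[Editor's note] With the same error repeating across attempts (nginx-deployment at Attempt:3 then 4, csi-node-driver at Attempt:7 then 8), it helps to reduce an excerpt like this to the set of sandbox IDs that failed network setup. A sketch of that summarization (the grep pattern assumes this log's field layout, i.e. 64-hex sandbox IDs inside "Failed to destroy network" entries; adjust it for other formats):

```shell
# Hedged sketch: from a journal excerpt on stdin, list the unique 64-hex
# sandbox IDs that produced "Failed to destroy network" errors.
failed_sandbox_ids() {
  grep 'Failed to destroy network for sandbox' |
    grep -oE '[0-9a-f]{64}' |
    sort -u
}
```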
Jan 29 11:25:59.835646 kubelet[1840]: I0129 11:25:59.835559 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c" Jan 29 11:25:59.837068 containerd[1452]: time="2025-01-29T11:25:59.837028130Z" level=info msg="StopPodSandbox for \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\"" Jan 29 11:25:59.837351 containerd[1452]: time="2025-01-29T11:25:59.837206626Z" level=info msg="Ensure that sandbox cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c in task-service has been cleanup successfully" Jan 29 11:25:59.839094 systemd[1]: run-netns-cni\x2dbf599763\x2d82a4\x2d5abc\x2de2b2\x2d85075901481e.mount: Deactivated successfully. Jan 29 11:25:59.839807 containerd[1452]: time="2025-01-29T11:25:59.839514358Z" level=info msg="TearDown network for sandbox \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\" successfully" Jan 29 11:25:59.839807 containerd[1452]: time="2025-01-29T11:25:59.839530028Z" level=info msg="StopPodSandbox for \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\" returns successfully" Jan 29 11:25:59.841022 containerd[1452]: time="2025-01-29T11:25:59.840987384Z" level=info msg="StopPodSandbox for \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\"" Jan 29 11:25:59.841078 containerd[1452]: time="2025-01-29T11:25:59.841060922Z" level=info msg="TearDown network for sandbox \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\" successfully" Jan 29 11:25:59.841078 containerd[1452]: time="2025-01-29T11:25:59.841073476Z" level=info msg="StopPodSandbox for \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\" returns successfully" Jan 29 11:25:59.841755 containerd[1452]: time="2025-01-29T11:25:59.841732984Z" level=info msg="StopPodSandbox for \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\"" Jan 29 11:25:59.841824 containerd[1452]: 
time="2025-01-29T11:25:59.841804508Z" level=info msg="TearDown network for sandbox \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\" successfully" Jan 29 11:25:59.841824 containerd[1452]: time="2025-01-29T11:25:59.841818765Z" level=info msg="StopPodSandbox for \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\" returns successfully" Jan 29 11:25:59.842638 containerd[1452]: time="2025-01-29T11:25:59.842459768Z" level=info msg="StopPodSandbox for \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\"" Jan 29 11:25:59.842638 containerd[1452]: time="2025-01-29T11:25:59.842557192Z" level=info msg="TearDown network for sandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\" successfully" Jan 29 11:25:59.842638 containerd[1452]: time="2025-01-29T11:25:59.842572411Z" level=info msg="StopPodSandbox for \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\" returns successfully" Jan 29 11:25:59.843725 containerd[1452]: time="2025-01-29T11:25:59.843684228Z" level=info msg="StopPodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\"" Jan 29 11:25:59.843776 containerd[1452]: time="2025-01-29T11:25:59.843760010Z" level=info msg="TearDown network for sandbox \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" successfully" Jan 29 11:25:59.843908 containerd[1452]: time="2025-01-29T11:25:59.843773135Z" level=info msg="StopPodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" returns successfully" Jan 29 11:25:59.843944 kubelet[1840]: I0129 11:25:59.843917 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045" Jan 29 11:25:59.844816 containerd[1452]: time="2025-01-29T11:25:59.844282651Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\"" Jan 29 11:25:59.844816 
containerd[1452]: time="2025-01-29T11:25:59.844359585Z" level=info msg="TearDown network for sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" successfully" Jan 29 11:25:59.844816 containerd[1452]: time="2025-01-29T11:25:59.844372399Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" returns successfully" Jan 29 11:25:59.844816 containerd[1452]: time="2025-01-29T11:25:59.844526069Z" level=info msg="StopPodSandbox for \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\"" Jan 29 11:25:59.844816 containerd[1452]: time="2025-01-29T11:25:59.844712218Z" level=info msg="Ensure that sandbox d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045 in task-service has been cleanup successfully" Jan 29 11:25:59.845031 containerd[1452]: time="2025-01-29T11:25:59.845014446Z" level=info msg="TearDown network for sandbox \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\" successfully" Jan 29 11:25:59.845113 containerd[1452]: time="2025-01-29T11:25:59.845097762Z" level=info msg="StopPodSandbox for \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\" returns successfully" Jan 29 11:25:59.846764 containerd[1452]: time="2025-01-29T11:25:59.845355415Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\"" Jan 29 11:25:59.847225 containerd[1452]: time="2025-01-29T11:25:59.847098027Z" level=info msg="TearDown network for sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" successfully" Jan 29 11:25:59.847434 containerd[1452]: time="2025-01-29T11:25:59.847415773Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" returns successfully" Jan 29 11:25:59.847541 containerd[1452]: time="2025-01-29T11:25:59.847243250Z" level=info msg="StopPodSandbox for \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\"" Jan 29 
11:25:59.847537 systemd[1]: run-netns-cni\x2ddd59a393\x2dca95\x2d912b\x2db4a5\x2d051f52bff703.mount: Deactivated successfully. Jan 29 11:25:59.847788 containerd[1452]: time="2025-01-29T11:25:59.847755712Z" level=info msg="TearDown network for sandbox \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\" successfully" Jan 29 11:25:59.850518 containerd[1452]: time="2025-01-29T11:25:59.850496979Z" level=info msg="StopPodSandbox for \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\" returns successfully" Jan 29 11:25:59.850927 containerd[1452]: time="2025-01-29T11:25:59.850908251Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\"" Jan 29 11:25:59.851146 containerd[1452]: time="2025-01-29T11:25:59.851041431Z" level=info msg="TearDown network for sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" successfully" Jan 29 11:25:59.851146 containerd[1452]: time="2025-01-29T11:25:59.851076657Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" returns successfully" Jan 29 11:25:59.851604 containerd[1452]: time="2025-01-29T11:25:59.851518838Z" level=info msg="StopPodSandbox for \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\"" Jan 29 11:25:59.852082 containerd[1452]: time="2025-01-29T11:25:59.851893882Z" level=info msg="TearDown network for sandbox \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\" successfully" Jan 29 11:25:59.852082 containerd[1452]: time="2025-01-29T11:25:59.852008807Z" level=info msg="StopPodSandbox for \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\" returns successfully" Jan 29 11:25:59.852403 containerd[1452]: time="2025-01-29T11:25:59.852368754Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\"" Jan 29 11:25:59.852706 containerd[1452]: time="2025-01-29T11:25:59.852592183Z" 
level=info msg="TearDown network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" successfully" Jan 29 11:25:59.852776 containerd[1452]: time="2025-01-29T11:25:59.852607632Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" returns successfully" Jan 29 11:25:59.853101 containerd[1452]: time="2025-01-29T11:25:59.853042508Z" level=info msg="StopPodSandbox for \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\"" Jan 29 11:25:59.853406 containerd[1452]: time="2025-01-29T11:25:59.853320280Z" level=info msg="TearDown network for sandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\" successfully" Jan 29 11:25:59.853406 containerd[1452]: time="2025-01-29T11:25:59.853357129Z" level=info msg="StopPodSandbox for \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\" returns successfully" Jan 29 11:25:59.854231 containerd[1452]: time="2025-01-29T11:25:59.853946325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:9,}" Jan 29 11:25:59.854820 containerd[1452]: time="2025-01-29T11:25:59.854780081Z" level=info msg="StopPodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\"" Jan 29 11:25:59.854907 containerd[1452]: time="2025-01-29T11:25:59.854883134Z" level=info msg="TearDown network for sandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" successfully" Jan 29 11:25:59.854973 containerd[1452]: time="2025-01-29T11:25:59.854905506Z" level=info msg="StopPodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" returns successfully" Jan 29 11:25:59.856624 containerd[1452]: time="2025-01-29T11:25:59.856593896Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4m8f7,Uid:ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8,Namespace:default,Attempt:5,}" Jan 29 11:26:00.318731 containerd[1452]: time="2025-01-29T11:26:00.318679042Z" level=error msg="Failed to destroy network for sandbox \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:26:00.319422 containerd[1452]: time="2025-01-29T11:26:00.318967643Z" level=error msg="encountered an error cleaning up failed sandbox \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:26:00.319422 containerd[1452]: time="2025-01-29T11:26:00.319028928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4m8f7,Uid:ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:26:00.319548 kubelet[1840]: E0129 11:26:00.319232 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:26:00.319548 kubelet[1840]: E0129 
11:26:00.319288 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-4m8f7" Jan 29 11:26:00.319548 kubelet[1840]: E0129 11:26:00.319313 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-4m8f7" Jan 29 11:26:00.319659 kubelet[1840]: E0129 11:26:00.319361 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-4m8f7_default(ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-4m8f7_default(ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-4m8f7" podUID="ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8" Jan 29 11:26:00.322041 containerd[1452]: time="2025-01-29T11:26:00.322003002Z" level=error msg="Failed to destroy network for sandbox \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:26:00.322322 containerd[1452]: time="2025-01-29T11:26:00.322273841Z" level=error msg="encountered an error cleaning up failed sandbox \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:26:00.322368 containerd[1452]: time="2025-01-29T11:26:00.322342860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:26:00.322661 kubelet[1840]: E0129 11:26:00.322554 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:26:00.322661 kubelet[1840]: E0129 11:26:00.322633 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:26:00.322661 kubelet[1840]: E0129 11:26:00.322655 1840 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86gkr" Jan 29 11:26:00.322806 kubelet[1840]: E0129 11:26:00.322699 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-86gkr_calico-system(618d298b-3aee-418b-8f1a-093ea40b4ebb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-86gkr" podUID="618d298b-3aee-418b-8f1a-093ea40b4ebb" Jan 29 11:26:00.538575 kubelet[1840]: E0129 11:26:00.538436 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:00.558805 containerd[1452]: time="2025-01-29T11:26:00.558752760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:26:00.559544 containerd[1452]: time="2025-01-29T11:26:00.559493130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:26:00.560930 containerd[1452]: 
time="2025-01-29T11:26:00.560887798Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:26:00.563190 containerd[1452]: time="2025-01-29T11:26:00.563165775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:26:00.564209 containerd[1452]: time="2025-01-29T11:26:00.563742658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.849035034s" Jan 29 11:26:00.564209 containerd[1452]: time="2025-01-29T11:26:00.563779417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:26:00.570437 containerd[1452]: time="2025-01-29T11:26:00.570305148Z" level=info msg="CreateContainer within sandbox \"a509e0078c7c7faeefce12d4b41736a6b32bfbec0626823b80f9c49a3c33ceea\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:26:00.591941 containerd[1452]: time="2025-01-29T11:26:00.591752384Z" level=info msg="CreateContainer within sandbox \"a509e0078c7c7faeefce12d4b41736a6b32bfbec0626823b80f9c49a3c33ceea\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"26e11aedb4c113535e01d9cf83270f75910d7d765dc9f63fa21100d86635d3a5\"" Jan 29 11:26:00.593150 containerd[1452]: time="2025-01-29T11:26:00.592770355Z" level=info msg="StartContainer for \"26e11aedb4c113535e01d9cf83270f75910d7d765dc9f63fa21100d86635d3a5\"" Jan 29 11:26:00.647540 systemd[1]: 
Started cri-containerd-26e11aedb4c113535e01d9cf83270f75910d7d765dc9f63fa21100d86635d3a5.scope - libcontainer container 26e11aedb4c113535e01d9cf83270f75910d7d765dc9f63fa21100d86635d3a5. Jan 29 11:26:00.688511 containerd[1452]: time="2025-01-29T11:26:00.688466276Z" level=info msg="StartContainer for \"26e11aedb4c113535e01d9cf83270f75910d7d765dc9f63fa21100d86635d3a5\" returns successfully" Jan 29 11:26:00.742116 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40-shm.mount: Deactivated successfully. Jan 29 11:26:00.742421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046-shm.mount: Deactivated successfully. Jan 29 11:26:00.742593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2581211531.mount: Deactivated successfully. Jan 29 11:26:00.762711 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:26:00.762780 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 29 11:26:00.869169 kubelet[1840]: I0129 11:26:00.868773 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046" Jan 29 11:26:00.872459 containerd[1452]: time="2025-01-29T11:26:00.871556373Z" level=info msg="StopPodSandbox for \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\"" Jan 29 11:26:00.872459 containerd[1452]: time="2025-01-29T11:26:00.871953718Z" level=info msg="Ensure that sandbox 64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046 in task-service has been cleanup successfully" Jan 29 11:26:00.882440 containerd[1452]: time="2025-01-29T11:26:00.881972178Z" level=info msg="TearDown network for sandbox \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\" successfully" Jan 29 11:26:00.882440 containerd[1452]: time="2025-01-29T11:26:00.882022823Z" level=info msg="StopPodSandbox for \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\" returns successfully" Jan 29 11:26:00.883737 systemd[1]: run-netns-cni\x2d983e1f71\x2de395\x2d7e9b\x2d9210\x2dc4c72960ab95.mount: Deactivated successfully. 
Jan 29 11:26:00.887349 containerd[1452]: time="2025-01-29T11:26:00.886743185Z" level=info msg="StopPodSandbox for \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\"" Jan 29 11:26:00.887349 containerd[1452]: time="2025-01-29T11:26:00.886920668Z" level=info msg="TearDown network for sandbox \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\" successfully" Jan 29 11:26:00.887349 containerd[1452]: time="2025-01-29T11:26:00.886949602Z" level=info msg="StopPodSandbox for \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\" returns successfully" Jan 29 11:26:00.889456 containerd[1452]: time="2025-01-29T11:26:00.889265921Z" level=info msg="StopPodSandbox for \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\"" Jan 29 11:26:00.889795 containerd[1452]: time="2025-01-29T11:26:00.889650522Z" level=info msg="TearDown network for sandbox \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\" successfully" Jan 29 11:26:00.889795 containerd[1452]: time="2025-01-29T11:26:00.889690118Z" level=info msg="StopPodSandbox for \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\" returns successfully" Jan 29 11:26:00.891218 kubelet[1840]: I0129 11:26:00.890721 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n7gxb" podStartSLOduration=3.248887105 podStartE2EDuration="24.890689704s" podCreationTimestamp="2025-01-29 11:25:36 +0000 UTC" firstStartedPulling="2025-01-29 11:25:38.922811275 +0000 UTC m=+2.828329162" lastFinishedPulling="2025-01-29 11:26:00.564613873 +0000 UTC m=+24.470131761" observedRunningTime="2025-01-29 11:26:00.888352336 +0000 UTC m=+24.793870273" watchObservedRunningTime="2025-01-29 11:26:00.890689704 +0000 UTC m=+24.796207632" Jan 29 11:26:00.892517 containerd[1452]: time="2025-01-29T11:26:00.892218854Z" level=info msg="StopPodSandbox for \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\"" Jan 29 
11:26:00.892517 containerd[1452]: time="2025-01-29T11:26:00.892371832Z" level=info msg="TearDown network for sandbox \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\" successfully" Jan 29 11:26:00.894051 containerd[1452]: time="2025-01-29T11:26:00.893982716Z" level=info msg="StopPodSandbox for \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\" returns successfully" Jan 29 11:26:00.896802 containerd[1452]: time="2025-01-29T11:26:00.896757786Z" level=info msg="StopPodSandbox for \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\"" Jan 29 11:26:00.897175 containerd[1452]: time="2025-01-29T11:26:00.897132320Z" level=info msg="TearDown network for sandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\" successfully" Jan 29 11:26:00.898760 kubelet[1840]: I0129 11:26:00.897768 1840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40" Jan 29 11:26:00.899152 containerd[1452]: time="2025-01-29T11:26:00.899077131Z" level=info msg="StopPodSandbox for \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\" returns successfully" Jan 29 11:26:00.901878 containerd[1452]: time="2025-01-29T11:26:00.899664243Z" level=info msg="StopPodSandbox for \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\"" Jan 29 11:26:00.901878 containerd[1452]: time="2025-01-29T11:26:00.900026763Z" level=info msg="Ensure that sandbox ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40 in task-service has been cleanup successfully" Jan 29 11:26:00.907512 containerd[1452]: time="2025-01-29T11:26:00.905222688Z" level=info msg="StopPodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\"" Jan 29 11:26:00.907938 containerd[1452]: time="2025-01-29T11:26:00.907918149Z" level=info msg="TearDown network for sandbox 
\"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" successfully" Jan 29 11:26:00.908129 containerd[1452]: time="2025-01-29T11:26:00.908111101Z" level=info msg="StopPodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" returns successfully" Jan 29 11:26:00.908906 systemd[1]: run-netns-cni\x2d7c078a7f\x2da5ca\x2d2f00\x2d925f\x2d44eae74fa9ef.mount: Deactivated successfully. Jan 29 11:26:00.909804 containerd[1452]: time="2025-01-29T11:26:00.909784342Z" level=info msg="TearDown network for sandbox \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\" successfully" Jan 29 11:26:00.910371 containerd[1452]: time="2025-01-29T11:26:00.910354022Z" level=info msg="StopPodSandbox for \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\" returns successfully" Jan 29 11:26:00.910972 containerd[1452]: time="2025-01-29T11:26:00.910950201Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\"" Jan 29 11:26:00.911221 containerd[1452]: time="2025-01-29T11:26:00.911203146Z" level=info msg="TearDown network for sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" successfully" Jan 29 11:26:00.911303 containerd[1452]: time="2025-01-29T11:26:00.911287565Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" returns successfully" Jan 29 11:26:00.911604 containerd[1452]: time="2025-01-29T11:26:00.911584062Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\"" Jan 29 11:26:00.911984 containerd[1452]: time="2025-01-29T11:26:00.911916976Z" level=info msg="TearDown network for sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" successfully" Jan 29 11:26:00.911984 containerd[1452]: time="2025-01-29T11:26:00.911934699Z" level=info msg="StopPodSandbox for 
\"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" returns successfully" Jan 29 11:26:00.912213 containerd[1452]: time="2025-01-29T11:26:00.911744903Z" level=info msg="StopPodSandbox for \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\"" Jan 29 11:26:00.912530 containerd[1452]: time="2025-01-29T11:26:00.912490202Z" level=info msg="TearDown network for sandbox \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\" successfully" Jan 29 11:26:00.912908 containerd[1452]: time="2025-01-29T11:26:00.912731055Z" level=info msg="StopPodSandbox for \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\" returns successfully" Jan 29 11:26:00.913214 containerd[1452]: time="2025-01-29T11:26:00.913194154Z" level=info msg="StopPodSandbox for \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\"" Jan 29 11:26:00.913414 containerd[1452]: time="2025-01-29T11:26:00.913332544Z" level=info msg="TearDown network for sandbox \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\" successfully" Jan 29 11:26:00.913414 containerd[1452]: time="2025-01-29T11:26:00.913373060Z" level=info msg="StopPodSandbox for \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\" returns successfully" Jan 29 11:26:00.913914 containerd[1452]: time="2025-01-29T11:26:00.913801184Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\"" Jan 29 11:26:00.914237 containerd[1452]: time="2025-01-29T11:26:00.914163875Z" level=info msg="StopPodSandbox for \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\"" Jan 29 11:26:00.914551 containerd[1452]: time="2025-01-29T11:26:00.914471923Z" level=info msg="TearDown network for sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" successfully" Jan 29 11:26:00.914551 containerd[1452]: time="2025-01-29T11:26:00.914500326Z" level=info msg="StopPodSandbox for 
\"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" returns successfully" Jan 29 11:26:00.915417 containerd[1452]: time="2025-01-29T11:26:00.914798366Z" level=info msg="TearDown network for sandbox \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\" successfully" Jan 29 11:26:00.915417 containerd[1452]: time="2025-01-29T11:26:00.914818484Z" level=info msg="StopPodSandbox for \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\" returns successfully" Jan 29 11:26:00.917339 containerd[1452]: time="2025-01-29T11:26:00.917035676Z" level=info msg="StopPodSandbox for \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\"" Jan 29 11:26:00.917339 containerd[1452]: time="2025-01-29T11:26:00.917125675Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\"" Jan 29 11:26:00.917339 containerd[1452]: time="2025-01-29T11:26:00.917153066Z" level=info msg="TearDown network for sandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\" successfully" Jan 29 11:26:00.917339 containerd[1452]: time="2025-01-29T11:26:00.917166912Z" level=info msg="StopPodSandbox for \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\" returns successfully" Jan 29 11:26:00.917339 containerd[1452]: time="2025-01-29T11:26:00.917241463Z" level=info msg="TearDown network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" successfully" Jan 29 11:26:00.917339 containerd[1452]: time="2025-01-29T11:26:00.917256572Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" returns successfully" Jan 29 11:26:00.918725 containerd[1452]: time="2025-01-29T11:26:00.918666528Z" level=info msg="StopPodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\"" Jan 29 11:26:00.918800 containerd[1452]: time="2025-01-29T11:26:00.918771815Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:10,}" Jan 29 11:26:00.919199 containerd[1452]: time="2025-01-29T11:26:00.919180603Z" level=info msg="TearDown network for sandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" successfully" Jan 29 11:26:00.919602 containerd[1452]: time="2025-01-29T11:26:00.919527965Z" level=info msg="StopPodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" returns successfully" Jan 29 11:26:00.920131 containerd[1452]: time="2025-01-29T11:26:00.920061938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4m8f7,Uid:ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8,Namespace:default,Attempt:6,}" Jan 29 11:26:01.337663 systemd-networkd[1358]: calif6b5b99b6f4: Link UP Jan 29 11:26:01.340250 systemd-networkd[1358]: calif6b5b99b6f4: Gained carrier Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.021 [INFO][2887] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.044 [INFO][2887] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.109-k8s-nginx--deployment--8587fbcb89--4m8f7-eth0 nginx-deployment-8587fbcb89- default ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8 1361 0 2025-01-29 11:25:55 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.109 nginx-deployment-8587fbcb89-4m8f7 eth0 default [] [] [kns.default ksa.default.default] calif6b5b99b6f4 [] []}} ContainerID="55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" Namespace="default" Pod="nginx-deployment-8587fbcb89-4m8f7" WorkloadEndpoint="172.24.4.109-k8s-nginx--deployment--8587fbcb89--4m8f7-" Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 
11:26:01.044 [INFO][2887] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" Namespace="default" Pod="nginx-deployment-8587fbcb89-4m8f7" WorkloadEndpoint="172.24.4.109-k8s-nginx--deployment--8587fbcb89--4m8f7-eth0" Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.080 [INFO][2900] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" HandleID="k8s-pod-network.55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" Workload="172.24.4.109-k8s-nginx--deployment--8587fbcb89--4m8f7-eth0" Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.167 [INFO][2900] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" HandleID="k8s-pod-network.55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" Workload="172.24.4.109-k8s-nginx--deployment--8587fbcb89--4m8f7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038a690), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.109", "pod":"nginx-deployment-8587fbcb89-4m8f7", "timestamp":"2025-01-29 11:26:01.080060192 +0000 UTC"}, Hostname:"172.24.4.109", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.168 [INFO][2900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.168 [INFO][2900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.168 [INFO][2900] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.109'
Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.174 [INFO][2900] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" host="172.24.4.109"
Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.268 [INFO][2900] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.109"
Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.278 [INFO][2900] ipam/ipam.go 489: Trying affinity for 192.168.118.0/26 host="172.24.4.109"
Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.283 [INFO][2900] ipam/ipam.go 155: Attempting to load block cidr=192.168.118.0/26 host="172.24.4.109"
Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.287 [INFO][2900] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="172.24.4.109"
Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.288 [INFO][2900] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" host="172.24.4.109"
Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.291 [INFO][2900] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac
Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.300 [INFO][2900] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" host="172.24.4.109"
Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.312 [INFO][2900] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.118.1/26] block=192.168.118.0/26 handle="k8s-pod-network.55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" host="172.24.4.109"
Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.313 [INFO][2900] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.1/26] handle="k8s-pod-network.55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" host="172.24.4.109"
Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.313 [INFO][2900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:26:01.364159 containerd[1452]: 2025-01-29 11:26:01.313 [INFO][2900] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.1/26] IPv6=[] ContainerID="55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" HandleID="k8s-pod-network.55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" Workload="172.24.4.109-k8s-nginx--deployment--8587fbcb89--4m8f7-eth0"
Jan 29 11:26:01.366151 containerd[1452]: 2025-01-29 11:26:01.319 [INFO][2887] cni-plugin/k8s.go 386: Populated endpoint ContainerID="55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" Namespace="default" Pod="nginx-deployment-8587fbcb89-4m8f7" WorkloadEndpoint="172.24.4.109-k8s-nginx--deployment--8587fbcb89--4m8f7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.109-k8s-nginx--deployment--8587fbcb89--4m8f7-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8", ResourceVersion:"1361", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.109", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-4m8f7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif6b5b99b6f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:26:01.366151 containerd[1452]: 2025-01-29 11:26:01.319 [INFO][2887] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.118.1/32] ContainerID="55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" Namespace="default" Pod="nginx-deployment-8587fbcb89-4m8f7" WorkloadEndpoint="172.24.4.109-k8s-nginx--deployment--8587fbcb89--4m8f7-eth0"
Jan 29 11:26:01.366151 containerd[1452]: 2025-01-29 11:26:01.319 [INFO][2887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6b5b99b6f4 ContainerID="55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" Namespace="default" Pod="nginx-deployment-8587fbcb89-4m8f7" WorkloadEndpoint="172.24.4.109-k8s-nginx--deployment--8587fbcb89--4m8f7-eth0"
Jan 29 11:26:01.366151 containerd[1452]: 2025-01-29 11:26:01.339 [INFO][2887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" Namespace="default" Pod="nginx-deployment-8587fbcb89-4m8f7" WorkloadEndpoint="172.24.4.109-k8s-nginx--deployment--8587fbcb89--4m8f7-eth0"
Jan 29 11:26:01.366151 containerd[1452]: 2025-01-29 11:26:01.341 [INFO][2887] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" Namespace="default" Pod="nginx-deployment-8587fbcb89-4m8f7" WorkloadEndpoint="172.24.4.109-k8s-nginx--deployment--8587fbcb89--4m8f7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.109-k8s-nginx--deployment--8587fbcb89--4m8f7-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8", ResourceVersion:"1361", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.109", ContainerID:"55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac", Pod:"nginx-deployment-8587fbcb89-4m8f7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif6b5b99b6f4", MAC:"3a:b2:a6:74:0a:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:26:01.366151 containerd[1452]: 2025-01-29 11:26:01.361 [INFO][2887] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac" Namespace="default" Pod="nginx-deployment-8587fbcb89-4m8f7" WorkloadEndpoint="172.24.4.109-k8s-nginx--deployment--8587fbcb89--4m8f7-eth0"
Jan 29 11:26:01.414981 containerd[1452]: time="2025-01-29T11:26:01.413942448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:26:01.414981 containerd[1452]: time="2025-01-29T11:26:01.414065239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:26:01.414981 containerd[1452]: time="2025-01-29T11:26:01.414173262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:26:01.416040 containerd[1452]: time="2025-01-29T11:26:01.415785068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:26:01.424048 systemd-networkd[1358]: calib80f798eb57: Link UP
Jan 29 11:26:01.426354 systemd-networkd[1358]: calib80f798eb57: Gained carrier
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.019 [INFO][2872] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.043 [INFO][2872] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.109-k8s-csi--node--driver--86gkr-eth0 csi-node-driver- calico-system 618d298b-3aee-418b-8f1a-093ea40b4ebb 1267 0 2025-01-29 11:25:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.24.4.109 csi-node-driver-86gkr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib80f798eb57 [] []}} ContainerID="478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" Namespace="calico-system" Pod="csi-node-driver-86gkr" WorkloadEndpoint="172.24.4.109-k8s-csi--node--driver--86gkr-"
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.043 [INFO][2872] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" Namespace="calico-system" Pod="csi-node-driver-86gkr" WorkloadEndpoint="172.24.4.109-k8s-csi--node--driver--86gkr-eth0"
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.085 [INFO][2901] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" HandleID="k8s-pod-network.478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" Workload="172.24.4.109-k8s-csi--node--driver--86gkr-eth0"
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.173 [INFO][2901] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" HandleID="k8s-pod-network.478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" Workload="172.24.4.109-k8s-csi--node--driver--86gkr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318d70), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.109", "pod":"csi-node-driver-86gkr", "timestamp":"2025-01-29 11:26:01.085543015 +0000 UTC"}, Hostname:"172.24.4.109", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.173 [INFO][2901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.313 [INFO][2901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.313 [INFO][2901] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.109'
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.320 [INFO][2901] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" host="172.24.4.109"
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.368 [INFO][2901] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.109"
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.380 [INFO][2901] ipam/ipam.go 489: Trying affinity for 192.168.118.0/26 host="172.24.4.109"
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.385 [INFO][2901] ipam/ipam.go 155: Attempting to load block cidr=192.168.118.0/26 host="172.24.4.109"
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.392 [INFO][2901] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="172.24.4.109"
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.392 [INFO][2901] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" host="172.24.4.109"
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.396 [INFO][2901] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.404 [INFO][2901] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" host="172.24.4.109"
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.415 [INFO][2901] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.118.2/26] block=192.168.118.0/26 handle="k8s-pod-network.478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" host="172.24.4.109"
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.415 [INFO][2901] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.2/26] handle="k8s-pod-network.478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" host="172.24.4.109"
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.415 [INFO][2901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:26:01.453185 containerd[1452]: 2025-01-29 11:26:01.415 [INFO][2901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.2/26] IPv6=[] ContainerID="478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" HandleID="k8s-pod-network.478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" Workload="172.24.4.109-k8s-csi--node--driver--86gkr-eth0"
Jan 29 11:26:01.453084 systemd[1]: Started cri-containerd-55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac.scope - libcontainer container 55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac.
Jan 29 11:26:01.453975 containerd[1452]: 2025-01-29 11:26:01.417 [INFO][2872] cni-plugin/k8s.go 386: Populated endpoint ContainerID="478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" Namespace="calico-system" Pod="csi-node-driver-86gkr" WorkloadEndpoint="172.24.4.109-k8s-csi--node--driver--86gkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.109-k8s-csi--node--driver--86gkr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"618d298b-3aee-418b-8f1a-093ea40b4ebb", ResourceVersion:"1267", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 25, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.109", ContainerID:"", Pod:"csi-node-driver-86gkr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.118.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib80f798eb57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:26:01.453975 containerd[1452]: 2025-01-29 11:26:01.417 [INFO][2872] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.118.2/32] ContainerID="478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" Namespace="calico-system" Pod="csi-node-driver-86gkr" WorkloadEndpoint="172.24.4.109-k8s-csi--node--driver--86gkr-eth0"
Jan 29 11:26:01.453975 containerd[1452]: 2025-01-29 11:26:01.417 [INFO][2872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib80f798eb57 ContainerID="478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" Namespace="calico-system" Pod="csi-node-driver-86gkr" WorkloadEndpoint="172.24.4.109-k8s-csi--node--driver--86gkr-eth0"
Jan 29 11:26:01.453975 containerd[1452]: 2025-01-29 11:26:01.423 [INFO][2872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" Namespace="calico-system" Pod="csi-node-driver-86gkr" WorkloadEndpoint="172.24.4.109-k8s-csi--node--driver--86gkr-eth0"
Jan 29 11:26:01.453975 containerd[1452]: 2025-01-29 11:26:01.429 [INFO][2872] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" Namespace="calico-system" Pod="csi-node-driver-86gkr" WorkloadEndpoint="172.24.4.109-k8s-csi--node--driver--86gkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.109-k8s-csi--node--driver--86gkr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"618d298b-3aee-418b-8f1a-093ea40b4ebb", ResourceVersion:"1267", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 25, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.109", ContainerID:"478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e", Pod:"csi-node-driver-86gkr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.118.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib80f798eb57", MAC:"de:2f:ea:01:18:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:26:01.453975 containerd[1452]: 2025-01-29 11:26:01.450 [INFO][2872] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e" Namespace="calico-system" Pod="csi-node-driver-86gkr" WorkloadEndpoint="172.24.4.109-k8s-csi--node--driver--86gkr-eth0"
Jan 29 11:26:01.482315 containerd[1452]: time="2025-01-29T11:26:01.482111853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:26:01.482952 containerd[1452]: time="2025-01-29T11:26:01.482920801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:26:01.483256 containerd[1452]: time="2025-01-29T11:26:01.483140223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:26:01.484533 containerd[1452]: time="2025-01-29T11:26:01.484458989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:26:01.506573 systemd[1]: Started cri-containerd-478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e.scope - libcontainer container 478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e.
Jan 29 11:26:01.508124 containerd[1452]: time="2025-01-29T11:26:01.508007467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4m8f7,Uid:ea5c4cb6-6ccb-4d9d-8ce1-844dcadd8be8,Namespace:default,Attempt:6,} returns sandbox id \"55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac\""
Jan 29 11:26:01.510203 containerd[1452]: time="2025-01-29T11:26:01.510010747Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 11:26:01.529356 containerd[1452]: time="2025-01-29T11:26:01.529302624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86gkr,Uid:618d298b-3aee-418b-8f1a-093ea40b4ebb,Namespace:calico-system,Attempt:10,} returns sandbox id \"478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e\""
Jan 29 11:26:01.539118 kubelet[1840]: E0129 11:26:01.539069 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:02.475657 systemd-networkd[1358]: calif6b5b99b6f4: Gained IPv6LL
Jan 29 11:26:02.539245 kubelet[1840]: E0129 11:26:02.539131 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:02.558423 kernel: bpftool[3171]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 29 11:26:02.603602 systemd-networkd[1358]: calib80f798eb57: Gained IPv6LL
Jan 29 11:26:02.824676 systemd-networkd[1358]: vxlan.calico: Link UP
Jan 29 11:26:02.824684 systemd-networkd[1358]: vxlan.calico: Gained carrier
Jan 29 11:26:03.540501 kubelet[1840]: E0129 11:26:03.540472 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:04.139634 systemd-networkd[1358]: vxlan.calico: Gained IPv6LL
Jan 29 11:26:04.542133 kubelet[1840]: E0129 11:26:04.541932 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:05.542777 kubelet[1840]: E0129 11:26:05.542727 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:05.800969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130014554.mount: Deactivated successfully.
Jan 29 11:26:06.543400 kubelet[1840]: E0129 11:26:06.543344 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:07.043478 containerd[1452]: time="2025-01-29T11:26:07.043382658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:26:07.045718 containerd[1452]: time="2025-01-29T11:26:07.045600610Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561"
Jan 29 11:26:07.047486 containerd[1452]: time="2025-01-29T11:26:07.047296203Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:26:07.051638 containerd[1452]: time="2025-01-29T11:26:07.051583909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:26:07.053120 containerd[1452]: time="2025-01-29T11:26:07.052628339Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.542590151s"
Jan 29 11:26:07.053120 containerd[1452]: time="2025-01-29T11:26:07.052661531Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\""
Jan 29 11:26:07.055788 containerd[1452]: time="2025-01-29T11:26:07.055519105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 29 11:26:07.056876 containerd[1452]: time="2025-01-29T11:26:07.056753691Z" level=info msg="CreateContainer within sandbox \"55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 29 11:26:07.083505 containerd[1452]: time="2025-01-29T11:26:07.083318075Z" level=info msg="CreateContainer within sandbox \"55b3e4c3818bd609985586cfb571324d4f695a73a999907bd1103f9a157c1fac\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"ce9d812f5a8a9fecb681b1b340840588de53941406991a29fcbaf39384148ec5\""
Jan 29 11:26:07.084652 containerd[1452]: time="2025-01-29T11:26:07.084170856Z" level=info msg="StartContainer for \"ce9d812f5a8a9fecb681b1b340840588de53941406991a29fcbaf39384148ec5\""
Jan 29 11:26:07.089249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount610277706.mount: Deactivated successfully.
Jan 29 11:26:07.129576 systemd[1]: Started cri-containerd-ce9d812f5a8a9fecb681b1b340840588de53941406991a29fcbaf39384148ec5.scope - libcontainer container ce9d812f5a8a9fecb681b1b340840588de53941406991a29fcbaf39384148ec5.
Jan 29 11:26:07.160283 containerd[1452]: time="2025-01-29T11:26:07.160238143Z" level=info msg="StartContainer for \"ce9d812f5a8a9fecb681b1b340840588de53941406991a29fcbaf39384148ec5\" returns successfully"
Jan 29 11:26:07.544020 kubelet[1840]: E0129 11:26:07.543928 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:08.544843 kubelet[1840]: E0129 11:26:08.544713 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:08.918652 containerd[1452]: time="2025-01-29T11:26:08.918593834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:26:08.920047 containerd[1452]: time="2025-01-29T11:26:08.919901127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Jan 29 11:26:08.921708 containerd[1452]: time="2025-01-29T11:26:08.921638358Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:26:08.924348 containerd[1452]: time="2025-01-29T11:26:08.924326152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:26:08.925226 containerd[1452]: time="2025-01-29T11:26:08.925158704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.869584616s"
Jan 29 11:26:08.925226 containerd[1452]: time="2025-01-29T11:26:08.925185284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Jan 29 11:26:08.927197 containerd[1452]: time="2025-01-29T11:26:08.927150411Z" level=info msg="CreateContainer within sandbox \"478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 29 11:26:08.944505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2386321080.mount: Deactivated successfully.
Jan 29 11:26:08.956075 containerd[1452]: time="2025-01-29T11:26:08.956022284Z" level=info msg="CreateContainer within sandbox \"478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ce262ea7216d9271cbb64ac2a00c222b096dae5cc4dede833d4dae88e4248d9d\""
Jan 29 11:26:08.956713 containerd[1452]: time="2025-01-29T11:26:08.956594488Z" level=info msg="StartContainer for \"ce262ea7216d9271cbb64ac2a00c222b096dae5cc4dede833d4dae88e4248d9d\""
Jan 29 11:26:08.991569 systemd[1]: Started cri-containerd-ce262ea7216d9271cbb64ac2a00c222b096dae5cc4dede833d4dae88e4248d9d.scope - libcontainer container ce262ea7216d9271cbb64ac2a00c222b096dae5cc4dede833d4dae88e4248d9d.
Jan 29 11:26:09.023107 containerd[1452]: time="2025-01-29T11:26:09.023062052Z" level=info msg="StartContainer for \"ce262ea7216d9271cbb64ac2a00c222b096dae5cc4dede833d4dae88e4248d9d\" returns successfully"
Jan 29 11:26:09.024218 containerd[1452]: time="2025-01-29T11:26:09.024188536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 29 11:26:09.544956 kubelet[1840]: E0129 11:26:09.544898 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:10.545887 kubelet[1840]: E0129 11:26:10.545836 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:10.919347 containerd[1452]: time="2025-01-29T11:26:10.919297514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:26:10.920629 containerd[1452]: time="2025-01-29T11:26:10.920584268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 29 11:26:10.921978 containerd[1452]: time="2025-01-29T11:26:10.921933891Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:26:10.924828 containerd[1452]: time="2025-01-29T11:26:10.924779551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:26:10.925499 containerd[1452]: time="2025-01-29T11:26:10.925378715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.9011585s"
Jan 29 11:26:10.925499 containerd[1452]: time="2025-01-29T11:26:10.925426044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 29 11:26:10.927343 containerd[1452]: time="2025-01-29T11:26:10.927276196Z" level=info msg="CreateContainer within sandbox \"478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 29 11:26:10.944643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount84615827.mount: Deactivated successfully.
Jan 29 11:26:10.954604 containerd[1452]: time="2025-01-29T11:26:10.954517475Z" level=info msg="CreateContainer within sandbox \"478e404d62f46b8b90870a430e15e41f3e7ae06b60901c36bbdbf81e707d389e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8698555990d4001ad21b9a5eb47fb426dbb87164746b7f169f5232000eb6dffd\""
Jan 29 11:26:10.954981 containerd[1452]: time="2025-01-29T11:26:10.954938324Z" level=info msg="StartContainer for \"8698555990d4001ad21b9a5eb47fb426dbb87164746b7f169f5232000eb6dffd\""
Jan 29 11:26:10.997990 systemd[1]: run-containerd-runc-k8s.io-8698555990d4001ad21b9a5eb47fb426dbb87164746b7f169f5232000eb6dffd-runc.6HOL1u.mount: Deactivated successfully.
Jan 29 11:26:11.007548 systemd[1]: Started cri-containerd-8698555990d4001ad21b9a5eb47fb426dbb87164746b7f169f5232000eb6dffd.scope - libcontainer container 8698555990d4001ad21b9a5eb47fb426dbb87164746b7f169f5232000eb6dffd.
Jan 29 11:26:11.044178 containerd[1452]: time="2025-01-29T11:26:11.044133366Z" level=info msg="StartContainer for \"8698555990d4001ad21b9a5eb47fb426dbb87164746b7f169f5232000eb6dffd\" returns successfully"
Jan 29 11:26:11.546523 kubelet[1840]: E0129 11:26:11.546443 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:11.650838 kubelet[1840]: I0129 11:26:11.650767 1840 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 29 11:26:11.650838 kubelet[1840]: I0129 11:26:11.650824 1840 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 29 11:26:12.035118 kubelet[1840]: I0129 11:26:12.034995 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-86gkr" podStartSLOduration=26.63968582 podStartE2EDuration="36.034964393s" podCreationTimestamp="2025-01-29 11:25:36 +0000 UTC" firstStartedPulling="2025-01-29 11:26:01.530813651 +0000 UTC m=+25.436331538" lastFinishedPulling="2025-01-29 11:26:10.926092234 +0000 UTC m=+34.831610111" observedRunningTime="2025-01-29 11:26:12.034189959 +0000 UTC m=+35.939707887" watchObservedRunningTime="2025-01-29 11:26:12.034964393 +0000 UTC m=+35.940482320"
Jan 29 11:26:12.035505 kubelet[1840]: I0129 11:26:12.035372 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-4m8f7" podStartSLOduration=11.489723046 podStartE2EDuration="17.035352891s" podCreationTimestamp="2025-01-29 11:25:55 +0000 UTC" firstStartedPulling="2025-01-29 11:26:01.509645933 +0000 UTC m=+25.415163810" lastFinishedPulling="2025-01-29 11:26:07.055275768 +0000 UTC m=+30.960793655" observedRunningTime="2025-01-29 11:26:07.966333248 +0000 UTC m=+31.871851175" watchObservedRunningTime="2025-01-29 11:26:12.035352891 +0000 UTC m=+35.940870819"
Jan 29 11:26:12.547600 kubelet[1840]: E0129 11:26:12.547488 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:13.548337 kubelet[1840]: E0129 11:26:13.548257 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:14.549537 kubelet[1840]: E0129 11:26:14.549454 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:15.550374 kubelet[1840]: E0129 11:26:15.550227 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:16.518021 kubelet[1840]: E0129 11:26:16.517919 1840 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:16.550581 kubelet[1840]: E0129 11:26:16.550526 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:17.551756 kubelet[1840]: E0129 11:26:17.551674 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:17.625764 systemd[1]: Created slice kubepods-besteffort-podb95850cb_29b7_4638_9d66_d9f0fba51fe0.slice - libcontainer container kubepods-besteffort-podb95850cb_29b7_4638_9d66_d9f0fba51fe0.slice.
Jan 29 11:26:17.721708 kubelet[1840]: I0129 11:26:17.721575 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b95850cb-29b7-4638-9d66-d9f0fba51fe0-data\") pod \"nfs-server-provisioner-0\" (UID: \"b95850cb-29b7-4638-9d66-d9f0fba51fe0\") " pod="default/nfs-server-provisioner-0" Jan 29 11:26:17.721708 kubelet[1840]: I0129 11:26:17.721691 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxcfh\" (UniqueName: \"kubernetes.io/projected/b95850cb-29b7-4638-9d66-d9f0fba51fe0-kube-api-access-sxcfh\") pod \"nfs-server-provisioner-0\" (UID: \"b95850cb-29b7-4638-9d66-d9f0fba51fe0\") " pod="default/nfs-server-provisioner-0" Jan 29 11:26:17.933342 containerd[1452]: time="2025-01-29T11:26:17.933206483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b95850cb-29b7-4638-9d66-d9f0fba51fe0,Namespace:default,Attempt:0,}" Jan 29 11:26:18.209044 systemd-networkd[1358]: cali60e51b789ff: Link UP Jan 29 11:26:18.212282 systemd-networkd[1358]: cali60e51b789ff: Gained carrier Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.055 [INFO][3432] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.109-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default b95850cb-29b7-4638-9d66-d9f0fba51fe0 1475 0 2025-01-29 11:26:17 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.24.4.109 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] 
[kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.109-k8s-nfs--server--provisioner--0-" Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.056 [INFO][3432] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.109-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.120 [INFO][3442] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" HandleID="k8s-pod-network.24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" Workload="172.24.4.109-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.143 [INFO][3442] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" HandleID="k8s-pod-network.24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" Workload="172.24.4.109-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002914e0), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.109", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 11:26:18.120106319 +0000 UTC"}, Hostname:"172.24.4.109", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.143 [INFO][3442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.143 [INFO][3442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.143 [INFO][3442] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.109' Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.148 [INFO][3442] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" host="172.24.4.109" Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.155 [INFO][3442] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.109" Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.165 [INFO][3442] ipam/ipam.go 489: Trying affinity for 192.168.118.0/26 host="172.24.4.109" Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.170 [INFO][3442] ipam/ipam.go 155: Attempting to load block cidr=192.168.118.0/26 host="172.24.4.109" Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.175 [INFO][3442] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="172.24.4.109" Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.176 [INFO][3442] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" host="172.24.4.109" Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.179 [INFO][3442] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59 Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.188 [INFO][3442] ipam/ipam.go 1203: Writing block in 
order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" host="172.24.4.109" Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.199 [INFO][3442] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.118.3/26] block=192.168.118.0/26 handle="k8s-pod-network.24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" host="172.24.4.109" Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.199 [INFO][3442] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.3/26] handle="k8s-pod-network.24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" host="172.24.4.109" Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.199 [INFO][3442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:26:18.232685 containerd[1452]: 2025-01-29 11:26:18.199 [INFO][3442] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.3/26] IPv6=[] ContainerID="24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" HandleID="k8s-pod-network.24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" Workload="172.24.4.109-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:26:18.235174 containerd[1452]: 2025-01-29 11:26:18.203 [INFO][3432] cni-plugin/k8s.go 386: Populated endpoint ContainerID="24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.109-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.109-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b95850cb-29b7-4638-9d66-d9f0fba51fe0", ResourceVersion:"1475", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 26, 17, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.109", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.118.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:26:18.235174 containerd[1452]: 2025-01-29 11:26:18.203 [INFO][3432] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.118.3/32] ContainerID="24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.109-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:26:18.235174 containerd[1452]: 2025-01-29 11:26:18.203 [INFO][3432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.109-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:26:18.235174 containerd[1452]: 2025-01-29 11:26:18.212 [INFO][3432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.109-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:26:18.235696 containerd[1452]: 2025-01-29 11:26:18.213 [INFO][3432] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.109-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.109-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b95850cb-29b7-4638-9d66-d9f0fba51fe0", ResourceVersion:"1475", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.109", ContainerID:"24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.118.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"06:c0:a1:46:06:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:26:18.235696 containerd[1452]: 2025-01-29 11:26:18.230 [INFO][3432] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="172.24.4.109-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:26:18.279513 containerd[1452]: time="2025-01-29T11:26:18.279250128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:26:18.279513 containerd[1452]: time="2025-01-29T11:26:18.279309560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:26:18.279513 containerd[1452]: time="2025-01-29T11:26:18.279330810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:26:18.279513 containerd[1452]: time="2025-01-29T11:26:18.279446277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:26:18.307034 systemd[1]: Started cri-containerd-24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59.scope - libcontainer container 24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59. 
Jan 29 11:26:18.345818 containerd[1452]: time="2025-01-29T11:26:18.345775737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b95850cb-29b7-4638-9d66-d9f0fba51fe0,Namespace:default,Attempt:0,} returns sandbox id \"24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59\"" Jan 29 11:26:18.347429 containerd[1452]: time="2025-01-29T11:26:18.347379907Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 11:26:18.557556 kubelet[1840]: E0129 11:26:18.552255 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:19.435628 systemd-networkd[1358]: cali60e51b789ff: Gained IPv6LL Jan 29 11:26:19.552936 kubelet[1840]: E0129 11:26:19.552870 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:20.334546 systemd[1]: run-containerd-runc-k8s.io-26e11aedb4c113535e01d9cf83270f75910d7d765dc9f63fa21100d86635d3a5-runc.P0YuYk.mount: Deactivated successfully. Jan 29 11:26:20.553951 kubelet[1840]: E0129 11:26:20.553907 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:21.386215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3716011304.mount: Deactivated successfully. 
Jan 29 11:26:21.554798 kubelet[1840]: E0129 11:26:21.554733 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:22.556483 kubelet[1840]: E0129 11:26:22.556445 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:23.556599 kubelet[1840]: E0129 11:26:23.556569 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:23.571111 containerd[1452]: time="2025-01-29T11:26:23.571050107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:26:23.572424 containerd[1452]: time="2025-01-29T11:26:23.572359764Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 29 11:26:23.573912 containerd[1452]: time="2025-01-29T11:26:23.573865489Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:26:23.577433 containerd[1452]: time="2025-01-29T11:26:23.577344585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:26:23.578610 containerd[1452]: time="2025-01-29T11:26:23.578465538Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size 
\"91036984\" in 5.231027422s" Jan 29 11:26:23.578610 containerd[1452]: time="2025-01-29T11:26:23.578504141Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 11:26:23.581241 containerd[1452]: time="2025-01-29T11:26:23.581035910Z" level=info msg="CreateContainer within sandbox \"24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 11:26:23.606826 containerd[1452]: time="2025-01-29T11:26:23.606790794Z" level=info msg="CreateContainer within sandbox \"24033f7d2d2c8b8f0bf19bd70c4b71d695983c83403063575d07ff6c96100e59\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6897e916f1684fbf712425b6b9e6a66433a1c79ce673744ff7d45cf2956c2f2b\"" Jan 29 11:26:23.607598 containerd[1452]: time="2025-01-29T11:26:23.607503782Z" level=info msg="StartContainer for \"6897e916f1684fbf712425b6b9e6a66433a1c79ce673744ff7d45cf2956c2f2b\"" Jan 29 11:26:23.642542 systemd[1]: Started cri-containerd-6897e916f1684fbf712425b6b9e6a66433a1c79ce673744ff7d45cf2956c2f2b.scope - libcontainer container 6897e916f1684fbf712425b6b9e6a66433a1c79ce673744ff7d45cf2956c2f2b. 
Jan 29 11:26:23.669888 containerd[1452]: time="2025-01-29T11:26:23.669541981Z" level=info msg="StartContainer for \"6897e916f1684fbf712425b6b9e6a66433a1c79ce673744ff7d45cf2956c2f2b\" returns successfully" Jan 29 11:26:24.232464 kubelet[1840]: I0129 11:26:24.231841 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.999012281 podStartE2EDuration="7.231806847s" podCreationTimestamp="2025-01-29 11:26:17 +0000 UTC" firstStartedPulling="2025-01-29 11:26:18.346933019 +0000 UTC m=+42.252450896" lastFinishedPulling="2025-01-29 11:26:23.579727575 +0000 UTC m=+47.485245462" observedRunningTime="2025-01-29 11:26:24.229835359 +0000 UTC m=+48.135353306" watchObservedRunningTime="2025-01-29 11:26:24.231806847 +0000 UTC m=+48.137324774" Jan 29 11:26:24.557848 kubelet[1840]: E0129 11:26:24.557604 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:25.559334 kubelet[1840]: E0129 11:26:25.559238 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:26.560134 kubelet[1840]: E0129 11:26:26.559999 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:27.572694 kubelet[1840]: E0129 11:26:27.566762 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:28.572281 kubelet[1840]: E0129 11:26:28.572170 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:29.573298 kubelet[1840]: E0129 11:26:29.573236 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:30.574383 kubelet[1840]: E0129 11:26:30.574211 1840 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:31.575518 kubelet[1840]: E0129 11:26:31.575444 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:32.576213 kubelet[1840]: E0129 11:26:32.576106 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:33.577195 kubelet[1840]: E0129 11:26:33.577090 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:34.578562 kubelet[1840]: E0129 11:26:34.578329 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:35.579621 kubelet[1840]: E0129 11:26:35.579535 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:36.517159 kubelet[1840]: E0129 11:26:36.517080 1840 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:36.566440 containerd[1452]: time="2025-01-29T11:26:36.566320661Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\"" Jan 29 11:26:36.567727 containerd[1452]: time="2025-01-29T11:26:36.566574066Z" level=info msg="TearDown network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" successfully" Jan 29 11:26:36.567727 containerd[1452]: time="2025-01-29T11:26:36.566669946Z" level=info msg="StopPodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" returns successfully" Jan 29 11:26:36.567727 containerd[1452]: time="2025-01-29T11:26:36.567333101Z" level=info msg="RemovePodSandbox for \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\"" Jan 29 11:26:36.567727 containerd[1452]: 
time="2025-01-29T11:26:36.567381261Z" level=info msg="Forcibly stopping sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\"" Jan 29 11:26:36.567727 containerd[1452]: time="2025-01-29T11:26:36.567575886Z" level=info msg="TearDown network for sandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" successfully" Jan 29 11:26:36.573658 containerd[1452]: time="2025-01-29T11:26:36.572985323Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:26:36.573658 containerd[1452]: time="2025-01-29T11:26:36.573088086Z" level=info msg="RemovePodSandbox \"6c6a2a8c27c4bdc4309bdf428d6a357cdc64c57a42cceeceb85845bf7b3ed484\" returns successfully" Jan 29 11:26:36.574642 containerd[1452]: time="2025-01-29T11:26:36.574274100Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\"" Jan 29 11:26:36.574642 containerd[1452]: time="2025-01-29T11:26:36.574502979Z" level=info msg="TearDown network for sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" successfully" Jan 29 11:26:36.574642 containerd[1452]: time="2025-01-29T11:26:36.574532895Z" level=info msg="StopPodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" returns successfully" Jan 29 11:26:36.576839 containerd[1452]: time="2025-01-29T11:26:36.575994476Z" level=info msg="RemovePodSandbox for \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\"" Jan 29 11:26:36.576839 containerd[1452]: time="2025-01-29T11:26:36.576051223Z" level=info msg="Forcibly stopping sandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\"" Jan 29 11:26:36.576839 containerd[1452]: time="2025-01-29T11:26:36.576184042Z" level=info msg="TearDown network for sandbox 
\"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" successfully" Jan 29 11:26:36.580663 kubelet[1840]: E0129 11:26:36.580593 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:36.581661 containerd[1452]: time="2025-01-29T11:26:36.581354630Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:26:36.581661 containerd[1452]: time="2025-01-29T11:26:36.581495465Z" level=info msg="RemovePodSandbox \"4efda4f1d2c49a64594b581a1de82a17de70c9aa8c9cdac3b48358a85647b734\" returns successfully" Jan 29 11:26:36.583477 containerd[1452]: time="2025-01-29T11:26:36.583054179Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\"" Jan 29 11:26:36.583477 containerd[1452]: time="2025-01-29T11:26:36.583255176Z" level=info msg="TearDown network for sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" successfully" Jan 29 11:26:36.583477 containerd[1452]: time="2025-01-29T11:26:36.583281305Z" level=info msg="StopPodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" returns successfully" Jan 29 11:26:36.584170 containerd[1452]: time="2025-01-29T11:26:36.584122382Z" level=info msg="RemovePodSandbox for \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\"" Jan 29 11:26:36.585188 containerd[1452]: time="2025-01-29T11:26:36.584304474Z" level=info msg="Forcibly stopping sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\"" Jan 29 11:26:36.585188 containerd[1452]: time="2025-01-29T11:26:36.584534666Z" level=info msg="TearDown network for sandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" successfully" Jan 29 
11:26:36.593729 containerd[1452]: time="2025-01-29T11:26:36.593668388Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:26:36.594118 containerd[1452]: time="2025-01-29T11:26:36.594073808Z" level=info msg="RemovePodSandbox \"7e2af7f536bb3fce3ac3fb1cc2cf11cc8bc543af58b6fe901038e38ab348392c\" returns successfully" Jan 29 11:26:36.595464 containerd[1452]: time="2025-01-29T11:26:36.595102327Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\"" Jan 29 11:26:36.595464 containerd[1452]: time="2025-01-29T11:26:36.595287715Z" level=info msg="TearDown network for sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" successfully" Jan 29 11:26:36.595464 containerd[1452]: time="2025-01-29T11:26:36.595315627Z" level=info msg="StopPodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" returns successfully" Jan 29 11:26:36.596188 containerd[1452]: time="2025-01-29T11:26:36.596143891Z" level=info msg="RemovePodSandbox for \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\"" Jan 29 11:26:36.596531 containerd[1452]: time="2025-01-29T11:26:36.596494188Z" level=info msg="Forcibly stopping sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\"" Jan 29 11:26:36.597236 containerd[1452]: time="2025-01-29T11:26:36.596780575Z" level=info msg="TearDown network for sandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" successfully" Jan 29 11:26:36.601781 containerd[1452]: time="2025-01-29T11:26:36.601694572Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:26:36.601899 containerd[1452]: time="2025-01-29T11:26:36.601790362Z" level=info msg="RemovePodSandbox \"86371f40bda4beb6428f23920f6a2340bd490bcca7a751c31cb8842c69611237\" returns successfully" Jan 29 11:26:36.603091 containerd[1452]: time="2025-01-29T11:26:36.602651116Z" level=info msg="StopPodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\"" Jan 29 11:26:36.603091 containerd[1452]: time="2025-01-29T11:26:36.602927314Z" level=info msg="TearDown network for sandbox \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" successfully" Jan 29 11:26:36.603091 containerd[1452]: time="2025-01-29T11:26:36.602961648Z" level=info msg="StopPodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" returns successfully" Jan 29 11:26:36.604219 containerd[1452]: time="2025-01-29T11:26:36.603928732Z" level=info msg="RemovePodSandbox for \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\"" Jan 29 11:26:36.604304 containerd[1452]: time="2025-01-29T11:26:36.604254313Z" level=info msg="Forcibly stopping sandbox \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\"" Jan 29 11:26:36.604547 containerd[1452]: time="2025-01-29T11:26:36.604446814Z" level=info msg="TearDown network for sandbox \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" successfully" Jan 29 11:26:36.609223 containerd[1452]: time="2025-01-29T11:26:36.609152570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:26:36.609355 containerd[1452]: time="2025-01-29T11:26:36.609239233Z" level=info msg="RemovePodSandbox \"0b1d1538fd7d56fadf2a086eb3c26ec0696e437b0f7d0c076b649f3c0e43dd1e\" returns successfully" Jan 29 11:26:36.610452 containerd[1452]: time="2025-01-29T11:26:36.610262994Z" level=info msg="StopPodSandbox for \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\"" Jan 29 11:26:36.610591 containerd[1452]: time="2025-01-29T11:26:36.610492164Z" level=info msg="TearDown network for sandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\" successfully" Jan 29 11:26:36.610591 containerd[1452]: time="2025-01-29T11:26:36.610523212Z" level=info msg="StopPodSandbox for \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\" returns successfully" Jan 29 11:26:36.613004 containerd[1452]: time="2025-01-29T11:26:36.611208918Z" level=info msg="RemovePodSandbox for \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\"" Jan 29 11:26:36.613004 containerd[1452]: time="2025-01-29T11:26:36.611268660Z" level=info msg="Forcibly stopping sandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\"" Jan 29 11:26:36.613004 containerd[1452]: time="2025-01-29T11:26:36.611449920Z" level=info msg="TearDown network for sandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\" successfully" Jan 29 11:26:36.616200 containerd[1452]: time="2025-01-29T11:26:36.615992541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:26:36.616200 containerd[1452]: time="2025-01-29T11:26:36.616070056Z" level=info msg="RemovePodSandbox \"5bf94320a4e8dca5e6598fb9482d4fdd4ac95d82c79e759a9d9342e956845bc4\" returns successfully" Jan 29 11:26:36.616867 containerd[1452]: time="2025-01-29T11:26:36.616810866Z" level=info msg="StopPodSandbox for \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\"" Jan 29 11:26:36.617338 containerd[1452]: time="2025-01-29T11:26:36.616989210Z" level=info msg="TearDown network for sandbox \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\" successfully" Jan 29 11:26:36.617338 containerd[1452]: time="2025-01-29T11:26:36.617026450Z" level=info msg="StopPodSandbox for \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\" returns successfully" Jan 29 11:26:36.617806 containerd[1452]: time="2025-01-29T11:26:36.617571552Z" level=info msg="RemovePodSandbox for \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\"" Jan 29 11:26:36.617806 containerd[1452]: time="2025-01-29T11:26:36.617696687Z" level=info msg="Forcibly stopping sandbox \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\"" Jan 29 11:26:36.618546 containerd[1452]: time="2025-01-29T11:26:36.617912542Z" level=info msg="TearDown network for sandbox \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\" successfully" Jan 29 11:26:36.623742 containerd[1452]: time="2025-01-29T11:26:36.623644984Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:26:36.624157 containerd[1452]: time="2025-01-29T11:26:36.623756082Z" level=info msg="RemovePodSandbox \"96d090a402a84d4b488586f767be67ea3a30f832652c79215743163113351d46\" returns successfully" Jan 29 11:26:36.624548 containerd[1452]: time="2025-01-29T11:26:36.624423354Z" level=info msg="StopPodSandbox for \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\"" Jan 29 11:26:36.624662 containerd[1452]: time="2025-01-29T11:26:36.624623319Z" level=info msg="TearDown network for sandbox \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\" successfully" Jan 29 11:26:36.624662 containerd[1452]: time="2025-01-29T11:26:36.624651432Z" level=info msg="StopPodSandbox for \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\" returns successfully" Jan 29 11:26:36.625513 containerd[1452]: time="2025-01-29T11:26:36.625443407Z" level=info msg="RemovePodSandbox for \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\"" Jan 29 11:26:36.625513 containerd[1452]: time="2025-01-29T11:26:36.625503279Z" level=info msg="Forcibly stopping sandbox \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\"" Jan 29 11:26:36.625822 containerd[1452]: time="2025-01-29T11:26:36.625641318Z" level=info msg="TearDown network for sandbox \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\" successfully" Jan 29 11:26:36.630625 containerd[1452]: time="2025-01-29T11:26:36.630536961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:26:36.630625 containerd[1452]: time="2025-01-29T11:26:36.630615939Z" level=info msg="RemovePodSandbox \"446c053affec4db053ee6bfbc03446ce1b9d445881382ce630cebbd64f66afdc\" returns successfully" Jan 29 11:26:36.631657 containerd[1452]: time="2025-01-29T11:26:36.631379272Z" level=info msg="StopPodSandbox for \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\"" Jan 29 11:26:36.631848 containerd[1452]: time="2025-01-29T11:26:36.631579647Z" level=info msg="TearDown network for sandbox \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\" successfully" Jan 29 11:26:36.632219 containerd[1452]: time="2025-01-29T11:26:36.632073393Z" level=info msg="StopPodSandbox for \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\" returns successfully" Jan 29 11:26:36.633083 containerd[1452]: time="2025-01-29T11:26:36.633014759Z" level=info msg="RemovePodSandbox for \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\"" Jan 29 11:26:36.633083 containerd[1452]: time="2025-01-29T11:26:36.633061907Z" level=info msg="Forcibly stopping sandbox \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\"" Jan 29 11:26:36.633268 containerd[1452]: time="2025-01-29T11:26:36.633191921Z" level=info msg="TearDown network for sandbox \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\" successfully" Jan 29 11:26:36.638075 containerd[1452]: time="2025-01-29T11:26:36.637989310Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:26:36.638293 containerd[1452]: time="2025-01-29T11:26:36.638087374Z" level=info msg="RemovePodSandbox \"cf2f046e39038ca6d7c49346229007cf523c1db0e9afc439feff5c368890ac3c\" returns successfully" Jan 29 11:26:36.639012 containerd[1452]: time="2025-01-29T11:26:36.638918372Z" level=info msg="StopPodSandbox for \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\"" Jan 29 11:26:36.639182 containerd[1452]: time="2025-01-29T11:26:36.639101476Z" level=info msg="TearDown network for sandbox \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\" successfully" Jan 29 11:26:36.639182 containerd[1452]: time="2025-01-29T11:26:36.639128526Z" level=info msg="StopPodSandbox for \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\" returns successfully" Jan 29 11:26:36.640263 containerd[1452]: time="2025-01-29T11:26:36.640186631Z" level=info msg="RemovePodSandbox for \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\"" Jan 29 11:26:36.640263 containerd[1452]: time="2025-01-29T11:26:36.640244830Z" level=info msg="Forcibly stopping sandbox \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\"" Jan 29 11:26:36.640529 containerd[1452]: time="2025-01-29T11:26:36.640373271Z" level=info msg="TearDown network for sandbox \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\" successfully" Jan 29 11:26:36.645336 containerd[1452]: time="2025-01-29T11:26:36.645248575Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:26:36.645336 containerd[1452]: time="2025-01-29T11:26:36.645336140Z" level=info msg="RemovePodSandbox \"64892c15be914366637185257e555b816fac43787177119c0fba507b322e5046\" returns successfully" Jan 29 11:26:36.646321 containerd[1452]: time="2025-01-29T11:26:36.645888145Z" level=info msg="StopPodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\"" Jan 29 11:26:36.646321 containerd[1452]: time="2025-01-29T11:26:36.646055278Z" level=info msg="TearDown network for sandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" successfully" Jan 29 11:26:36.646321 containerd[1452]: time="2025-01-29T11:26:36.646084533Z" level=info msg="StopPodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" returns successfully" Jan 29 11:26:36.646853 containerd[1452]: time="2025-01-29T11:26:36.646807360Z" level=info msg="RemovePodSandbox for \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\"" Jan 29 11:26:36.647544 containerd[1452]: time="2025-01-29T11:26:36.646982008Z" level=info msg="Forcibly stopping sandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\"" Jan 29 11:26:36.647544 containerd[1452]: time="2025-01-29T11:26:36.647121439Z" level=info msg="TearDown network for sandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" successfully" Jan 29 11:26:36.651550 containerd[1452]: time="2025-01-29T11:26:36.651448464Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:26:36.651550 containerd[1452]: time="2025-01-29T11:26:36.651530408Z" level=info msg="RemovePodSandbox \"d43f61bfb0fe4d7da28c4dec51b953908bf04f9ec197732e97c54022d5b6daca\" returns successfully" Jan 29 11:26:36.653350 containerd[1452]: time="2025-01-29T11:26:36.652646903Z" level=info msg="StopPodSandbox for \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\"" Jan 29 11:26:36.653350 containerd[1452]: time="2025-01-29T11:26:36.652877806Z" level=info msg="TearDown network for sandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\" successfully" Jan 29 11:26:36.653350 containerd[1452]: time="2025-01-29T11:26:36.652915286Z" level=info msg="StopPodSandbox for \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\" returns successfully" Jan 29 11:26:36.654378 containerd[1452]: time="2025-01-29T11:26:36.653885386Z" level=info msg="RemovePodSandbox for \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\"" Jan 29 11:26:36.654378 containerd[1452]: time="2025-01-29T11:26:36.653968311Z" level=info msg="Forcibly stopping sandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\"" Jan 29 11:26:36.654378 containerd[1452]: time="2025-01-29T11:26:36.654104557Z" level=info msg="TearDown network for sandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\" successfully" Jan 29 11:26:36.660872 containerd[1452]: time="2025-01-29T11:26:36.660776150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:26:36.661024 containerd[1452]: time="2025-01-29T11:26:36.660882350Z" level=info msg="RemovePodSandbox \"a2346037cd9e019ec9622195330ad01590799c496e9111a1b4848cb2b058fc9b\" returns successfully" Jan 29 11:26:36.663121 containerd[1452]: time="2025-01-29T11:26:36.663044345Z" level=info msg="StopPodSandbox for \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\"" Jan 29 11:26:36.663284 containerd[1452]: time="2025-01-29T11:26:36.663228992Z" level=info msg="TearDown network for sandbox \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\" successfully" Jan 29 11:26:36.663284 containerd[1452]: time="2025-01-29T11:26:36.663266733Z" level=info msg="StopPodSandbox for \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\" returns successfully" Jan 29 11:26:36.664108 containerd[1452]: time="2025-01-29T11:26:36.664026297Z" level=info msg="RemovePodSandbox for \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\"" Jan 29 11:26:36.664108 containerd[1452]: time="2025-01-29T11:26:36.664088764Z" level=info msg="Forcibly stopping sandbox \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\"" Jan 29 11:26:36.664315 containerd[1452]: time="2025-01-29T11:26:36.664222625Z" level=info msg="TearDown network for sandbox \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\" successfully" Jan 29 11:26:36.670489 containerd[1452]: time="2025-01-29T11:26:36.670380896Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:26:36.670740 containerd[1452]: time="2025-01-29T11:26:36.670699534Z" level=info msg="RemovePodSandbox \"c99c2ab216ba96935d83a55351d82204c3033e1ba9f89ff5a542729b72dbfc2e\" returns successfully" Jan 29 11:26:36.671644 containerd[1452]: time="2025-01-29T11:26:36.671596235Z" level=info msg="StopPodSandbox for \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\"" Jan 29 11:26:36.672041 containerd[1452]: time="2025-01-29T11:26:36.671998259Z" level=info msg="TearDown network for sandbox \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\" successfully" Jan 29 11:26:36.672314 containerd[1452]: time="2025-01-29T11:26:36.672157548Z" level=info msg="StopPodSandbox for \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\" returns successfully" Jan 29 11:26:36.673037 containerd[1452]: time="2025-01-29T11:26:36.672958480Z" level=info msg="RemovePodSandbox for \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\"" Jan 29 11:26:36.673162 containerd[1452]: time="2025-01-29T11:26:36.673030135Z" level=info msg="Forcibly stopping sandbox \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\"" Jan 29 11:26:36.673267 containerd[1452]: time="2025-01-29T11:26:36.673184815Z" level=info msg="TearDown network for sandbox \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\" successfully" Jan 29 11:26:36.678407 containerd[1452]: time="2025-01-29T11:26:36.678304798Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:26:36.678532 containerd[1452]: time="2025-01-29T11:26:36.678445041Z" level=info msg="RemovePodSandbox \"754b11e0e57be76c9386b44b5ecb0d0301a0a92f038b8732dc93ae3a71dd1b5d\" returns successfully" Jan 29 11:26:36.679490 containerd[1452]: time="2025-01-29T11:26:36.679213814Z" level=info msg="StopPodSandbox for \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\"" Jan 29 11:26:36.679490 containerd[1452]: time="2025-01-29T11:26:36.679444266Z" level=info msg="TearDown network for sandbox \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\" successfully" Jan 29 11:26:36.679490 containerd[1452]: time="2025-01-29T11:26:36.679477258Z" level=info msg="StopPodSandbox for \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\" returns successfully" Jan 29 11:26:36.680086 containerd[1452]: time="2025-01-29T11:26:36.680009827Z" level=info msg="RemovePodSandbox for \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\"" Jan 29 11:26:36.680086 containerd[1452]: time="2025-01-29T11:26:36.680073857Z" level=info msg="Forcibly stopping sandbox \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\"" Jan 29 11:26:36.680325 containerd[1452]: time="2025-01-29T11:26:36.680204212Z" level=info msg="TearDown network for sandbox \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\" successfully" Jan 29 11:26:36.685012 containerd[1452]: time="2025-01-29T11:26:36.684918334Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:26:36.685012 containerd[1452]: time="2025-01-29T11:26:36.685001320Z" level=info msg="RemovePodSandbox \"d8f23b99dbe7a6603696029dfce891737ff0553fd7e492e56d5abdab70be9045\" returns successfully" Jan 29 11:26:36.686547 containerd[1452]: time="2025-01-29T11:26:36.685891028Z" level=info msg="StopPodSandbox for \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\"" Jan 29 11:26:36.686547 containerd[1452]: time="2025-01-29T11:26:36.686126971Z" level=info msg="TearDown network for sandbox \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\" successfully" Jan 29 11:26:36.686547 containerd[1452]: time="2025-01-29T11:26:36.686157508Z" level=info msg="StopPodSandbox for \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\" returns successfully" Jan 29 11:26:36.686818 containerd[1452]: time="2025-01-29T11:26:36.686751011Z" level=info msg="RemovePodSandbox for \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\"" Jan 29 11:26:36.686910 containerd[1452]: time="2025-01-29T11:26:36.686818939Z" level=info msg="Forcibly stopping sandbox \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\"" Jan 29 11:26:36.687104 containerd[1452]: time="2025-01-29T11:26:36.687006260Z" level=info msg="TearDown network for sandbox \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\" successfully" Jan 29 11:26:36.692907 containerd[1452]: time="2025-01-29T11:26:36.692833490Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:26:36.693063 containerd[1452]: time="2025-01-29T11:26:36.692945219Z" level=info msg="RemovePodSandbox \"ae80fc57d75c81eea011211f3dba755c951fc70a9bd407dde122d92bf6a3df40\" returns successfully" Jan 29 11:26:37.581519 kubelet[1840]: E0129 11:26:37.581341 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:38.582248 kubelet[1840]: E0129 11:26:38.582138 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:39.583220 kubelet[1840]: E0129 11:26:39.583164 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:40.583633 kubelet[1840]: E0129 11:26:40.583531 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:41.584491 kubelet[1840]: E0129 11:26:41.584361 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:42.584704 kubelet[1840]: E0129 11:26:42.584628 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:43.585964 kubelet[1840]: E0129 11:26:43.585877 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:44.586924 kubelet[1840]: E0129 11:26:44.586862 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:45.587716 kubelet[1840]: E0129 11:26:45.587609 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:46.588439 kubelet[1840]: E0129 11:26:46.588231 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:26:47.588616 kubelet[1840]: E0129 11:26:47.588531 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:48.373262 systemd[1]: Created slice kubepods-besteffort-pod7e1d8590_4093_4d9d_a760_b633326bdfd1.slice - libcontainer container kubepods-besteffort-pod7e1d8590_4093_4d9d_a760_b633326bdfd1.slice. Jan 29 11:26:48.526821 kubelet[1840]: I0129 11:26:48.526689 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cljfr\" (UniqueName: \"kubernetes.io/projected/7e1d8590-4093-4d9d-a760-b633326bdfd1-kube-api-access-cljfr\") pod \"test-pod-1\" (UID: \"7e1d8590-4093-4d9d-a760-b633326bdfd1\") " pod="default/test-pod-1" Jan 29 11:26:48.526821 kubelet[1840]: I0129 11:26:48.526810 1840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-01531f66-58f9-40ad-bb40-35817105fec5\" (UniqueName: \"kubernetes.io/nfs/7e1d8590-4093-4d9d-a760-b633326bdfd1-pvc-01531f66-58f9-40ad-bb40-35817105fec5\") pod \"test-pod-1\" (UID: \"7e1d8590-4093-4d9d-a760-b633326bdfd1\") " pod="default/test-pod-1" Jan 29 11:26:48.589706 kubelet[1840]: E0129 11:26:48.589609 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:48.697576 kernel: FS-Cache: Loaded Jan 29 11:26:48.786847 kernel: RPC: Registered named UNIX socket transport module. Jan 29 11:26:48.787027 kernel: RPC: Registered udp transport module. Jan 29 11:26:48.787103 kernel: RPC: Registered tcp transport module. Jan 29 11:26:48.788278 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 11:26:48.788436 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 29 11:26:49.095309 kernel: NFS: Registering the id_resolver key type Jan 29 11:26:49.095488 kernel: Key type id_resolver registered Jan 29 11:26:49.097464 kernel: Key type id_legacy registered Jan 29 11:26:49.143546 nfsidmap[3669]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Jan 29 11:26:49.153667 nfsidmap[3670]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Jan 29 11:26:49.282592 containerd[1452]: time="2025-01-29T11:26:49.282080112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7e1d8590-4093-4d9d-a760-b633326bdfd1,Namespace:default,Attempt:0,}" Jan 29 11:26:49.527060 systemd-networkd[1358]: cali5ec59c6bf6e: Link UP Jan 29 11:26:49.528666 systemd-networkd[1358]: cali5ec59c6bf6e: Gained carrier Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.398 [INFO][3672] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.109-k8s-test--pod--1-eth0 default 7e1d8590-4093-4d9d-a760-b633326bdfd1 1579 0 2025-01-29 11:26:19 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.109 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.109-k8s-test--pod--1-" Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.399 [INFO][3672] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.109-k8s-test--pod--1-eth0"
Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.429 [INFO][3682] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" HandleID="k8s-pod-network.cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" Workload="172.24.4.109-k8s-test--pod--1-eth0" Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.453 [INFO][3682] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" HandleID="k8s-pod-network.cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" Workload="172.24.4.109-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030ea90), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.109", "pod":"test-pod-1", "timestamp":"2025-01-29 11:26:49.429181094 +0000 UTC"}, Hostname:"172.24.4.109", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.453 [INFO][3682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.453 [INFO][3682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.453 [INFO][3682] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.109' Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.458 [INFO][3682] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" host="172.24.4.109" Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.468 [INFO][3682] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.109" Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.477 [INFO][3682] ipam/ipam.go 489: Trying affinity for 192.168.118.0/26 host="172.24.4.109" Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.482 [INFO][3682] ipam/ipam.go 155: Attempting to load block cidr=192.168.118.0/26 host="172.24.4.109" Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.487 [INFO][3682] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="172.24.4.109" Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.488 [INFO][3682] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" host="172.24.4.109" Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.491 [INFO][3682] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.499 [INFO][3682] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" host="172.24.4.109"
Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.516 [INFO][3682] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.118.4/26] block=192.168.118.0/26 handle="k8s-pod-network.cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" host="172.24.4.109" Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.516 [INFO][3682] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.4/26] handle="k8s-pod-network.cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" host="172.24.4.109" Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.516 [INFO][3682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.516 [INFO][3682] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.4/26] IPv6=[] ContainerID="cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" HandleID="k8s-pod-network.cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" Workload="172.24.4.109-k8s-test--pod--1-eth0"
Jan 29 11:26:49.551738 containerd[1452]: 2025-01-29 11:26:49.519 [INFO][3672] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.109-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.109-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"7e1d8590-4093-4d9d-a760-b633326bdfd1", ResourceVersion:"1579", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.109", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:26:49.555596 containerd[1452]: 2025-01-29 11:26:49.520 [INFO][3672] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.118.4/32] ContainerID="cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.109-k8s-test--pod--1-eth0" Jan 29 11:26:49.555596 containerd[1452]: 2025-01-29 11:26:49.520 [INFO][3672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.109-k8s-test--pod--1-eth0" Jan 29 11:26:49.555596 containerd[1452]: 2025-01-29 11:26:49.527 [INFO][3672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.109-k8s-test--pod--1-eth0"
Jan 29 11:26:49.555596 containerd[1452]: 2025-01-29 11:26:49.529 [INFO][3672] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.109-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.109-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"7e1d8590-4093-4d9d-a760-b633326bdfd1", ResourceVersion:"1579", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.109", ContainerID:"cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"fa:4d:c7:91:2e:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:26:49.555596 containerd[1452]: 2025-01-29 11:26:49.543 [INFO][3672] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.109-k8s-test--pod--1-eth0" Jan 29 11:26:49.589970 kubelet[1840]: E0129 11:26:49.589844 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:49.596971 containerd[1452]: time="2025-01-29T11:26:49.596853217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:26:49.597736 containerd[1452]: time="2025-01-29T11:26:49.596929953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:26:49.597736 containerd[1452]: time="2025-01-29T11:26:49.596968376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:26:49.597736 containerd[1452]: time="2025-01-29T11:26:49.597123089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:26:49.623597 systemd[1]: Started cri-containerd-cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb.scope - libcontainer container cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb. Jan 29 11:26:49.666933 containerd[1452]: time="2025-01-29T11:26:49.666872754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7e1d8590-4093-4d9d-a760-b633326bdfd1,Namespace:default,Attempt:0,} returns sandbox id \"cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb\"" Jan 29 11:26:49.668822 containerd[1452]: time="2025-01-29T11:26:49.668622502Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:26:50.096037 containerd[1452]: time="2025-01-29T11:26:50.095906550Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:26:50.097999 containerd[1452]: time="2025-01-29T11:26:50.097897725Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 11:26:50.106108 containerd[1452]: time="2025-01-29T11:26:50.106027733Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 437.301374ms" Jan 29 11:26:50.106108 containerd[1452]:
time="2025-01-29T11:26:50.106102225Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 11:26:50.110610 containerd[1452]: time="2025-01-29T11:26:50.110526490Z" level=info msg="CreateContainer within sandbox \"cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 11:26:50.139424 containerd[1452]: time="2025-01-29T11:26:50.139180178Z" level=info msg="CreateContainer within sandbox \"cdf3732c59802fc692e78841dbb4027f72b36efc2666f3141d4ffa8fb4c56cfb\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"7b25d60eeb926a5f7eff69340de6f14283bf9c227a73320a41fb8f5412c549a7\"" Jan 29 11:26:50.142447 containerd[1452]: time="2025-01-29T11:26:50.141130464Z" level=info msg="StartContainer for \"7b25d60eeb926a5f7eff69340de6f14283bf9c227a73320a41fb8f5412c549a7\"" Jan 29 11:26:50.217828 systemd[1]: Started cri-containerd-7b25d60eeb926a5f7eff69340de6f14283bf9c227a73320a41fb8f5412c549a7.scope - libcontainer container 7b25d60eeb926a5f7eff69340de6f14283bf9c227a73320a41fb8f5412c549a7. 
Jan 29 11:26:50.259380 containerd[1452]: time="2025-01-29T11:26:50.259327093Z" level=info msg="StartContainer for \"7b25d60eeb926a5f7eff69340de6f14283bf9c227a73320a41fb8f5412c549a7\" returns successfully" Jan 29 11:26:50.313804 kubelet[1840]: I0129 11:26:50.313750 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=30.874142675 podStartE2EDuration="31.313729922s" podCreationTimestamp="2025-01-29 11:26:19 +0000 UTC" firstStartedPulling="2025-01-29 11:26:49.668323124 +0000 UTC m=+73.573841001" lastFinishedPulling="2025-01-29 11:26:50.107910321 +0000 UTC m=+74.013428248" observedRunningTime="2025-01-29 11:26:50.313489487 +0000 UTC m=+74.219007404" watchObservedRunningTime="2025-01-29 11:26:50.313729922 +0000 UTC m=+74.219247809" Jan 29 11:26:50.590217 kubelet[1840]: E0129 11:26:50.590090 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:51.499766 systemd-networkd[1358]: cali5ec59c6bf6e: Gained IPv6LL Jan 29 11:26:51.591076 kubelet[1840]: E0129 11:26:51.590981 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:52.591615 kubelet[1840]: E0129 11:26:52.591532 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:53.592304 kubelet[1840]: E0129 11:26:53.592150 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:54.593170 kubelet[1840]: E0129 11:26:54.593074 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:55.594176 kubelet[1840]: E0129 11:26:55.594071 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:56.518045 
kubelet[1840]: E0129 11:26:56.517970 1840 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:56.594643 kubelet[1840]: E0129 11:26:56.594556 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:57.595802 kubelet[1840]: E0129 11:26:57.595719 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:58.596809 kubelet[1840]: E0129 11:26:58.596677 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:26:59.597704 kubelet[1840]: E0129 11:26:59.597610 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:00.598456 kubelet[1840]: E0129 11:27:00.598348 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:01.599195 kubelet[1840]: E0129 11:27:01.599131 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:02.600280 kubelet[1840]: E0129 11:27:02.600046 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:03.600905 kubelet[1840]: E0129 11:27:03.600822 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:04.601666 kubelet[1840]: E0129 11:27:04.601490 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:05.602130 kubelet[1840]: E0129 11:27:05.602021 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:06.603249 
kubelet[1840]: E0129 11:27:06.603174 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:07.604492 kubelet[1840]: E0129 11:27:07.604380 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:08.604961 kubelet[1840]: E0129 11:27:08.604852 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:09.605568 kubelet[1840]: E0129 11:27:09.605455 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:10.606771 kubelet[1840]: E0129 11:27:10.606690 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:11.607947 kubelet[1840]: E0129 11:27:11.607828 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:12.608732 kubelet[1840]: E0129 11:27:12.608658 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:13.609368 kubelet[1840]: E0129 11:27:13.609289 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:14.610456 kubelet[1840]: E0129 11:27:14.610311 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:15.611132 kubelet[1840]: E0129 11:27:15.611047 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:16.517593 kubelet[1840]: E0129 11:27:16.517505 1840 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:16.611866 
kubelet[1840]: E0129 11:27:16.611800 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:17.612084 kubelet[1840]: E0129 11:27:17.611982 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:18.612669 kubelet[1840]: E0129 11:27:18.612588 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:19.613712 kubelet[1840]: E0129 11:27:19.613640 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:20.614588 kubelet[1840]: E0129 11:27:20.614492 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:21.615248 kubelet[1840]: E0129 11:27:21.615150 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:27:22.616480 kubelet[1840]: E0129 11:27:22.616357 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"