Jan 29 12:13:14.065592 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025
Jan 29 12:13:14.065644 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 12:13:14.065654 kernel: BIOS-provided physical RAM map:
Jan 29 12:13:14.065662 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 12:13:14.065669 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 12:13:14.065697 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 12:13:14.065706 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 29 12:13:14.065714 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 29 12:13:14.065721 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 12:13:14.065729 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 12:13:14.065736 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 29 12:13:14.065744 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 12:13:14.065752 kernel: NX (Execute Disable) protection: active
Jan 29 12:13:14.065762 kernel: APIC: Static calls initialized
Jan 29 12:13:14.065771 kernel: SMBIOS 3.0.0 present.
Jan 29 12:13:14.065779 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 29 12:13:14.065787 kernel: Hypervisor detected: KVM
Jan 29 12:13:14.065794 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 12:13:14.065802 kernel: kvm-clock: using sched offset of 3709949706 cycles
Jan 29 12:13:14.065812 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 12:13:14.065821 kernel: tsc: Detected 1996.249 MHz processor
Jan 29 12:13:14.065829 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 12:13:14.065838 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 12:13:14.065846 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 29 12:13:14.065855 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 12:13:14.065863 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 12:13:14.065871 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 29 12:13:14.065879 kernel: ACPI: Early table checksum verification disabled
Jan 29 12:13:14.065888 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 29 12:13:14.065897 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:13:14.065905 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:13:14.065913 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:13:14.065921 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 29 12:13:14.065929 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:13:14.065937 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:13:14.065945 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 29 12:13:14.065955 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 29 12:13:14.065963 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 29 12:13:14.065971 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 29 12:13:14.065979 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 29 12:13:14.065990 kernel: No NUMA configuration found
Jan 29 12:13:14.065999 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 29 12:13:14.066007 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jan 29 12:13:14.066017 kernel: Zone ranges:
Jan 29 12:13:14.066025 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 12:13:14.066034 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 29 12:13:14.066042 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 29 12:13:14.066051 kernel: Movable zone start for each node
Jan 29 12:13:14.066059 kernel: Early memory node ranges
Jan 29 12:13:14.066067 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 12:13:14.066076 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 29 12:13:14.066086 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 29 12:13:14.066094 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 29 12:13:14.066102 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 12:13:14.066111 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 12:13:14.066119 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 29 12:13:14.066128 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 12:13:14.066136 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 12:13:14.066144 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 12:13:14.066153 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 12:13:14.066164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 12:13:14.066173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 12:13:14.066181 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 12:13:14.066190 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 12:13:14.066198 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 12:13:14.066206 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 12:13:14.066215 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 12:13:14.066223 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 29 12:13:14.066231 kernel: Booting paravirtualized kernel on KVM
Jan 29 12:13:14.066242 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 12:13:14.066250 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 12:13:14.066259 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 12:13:14.066267 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 12:13:14.066275 kernel: pcpu-alloc: [0] 0 1
Jan 29 12:13:14.066283 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 29 12:13:14.066293 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 12:13:14.066302 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 12:13:14.066313 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 12:13:14.066321 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 12:13:14.066330 kernel: Fallback order for Node 0: 0
Jan 29 12:13:14.066338 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 29 12:13:14.066346 kernel: Policy zone: Normal
Jan 29 12:13:14.066355 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 12:13:14.066363 kernel: software IO TLB: area num 2.
Jan 29 12:13:14.066372 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 227308K reserved, 0K cma-reserved)
Jan 29 12:13:14.066381 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 12:13:14.066391 kernel: ftrace: allocating 37923 entries in 149 pages
Jan 29 12:13:14.066399 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 12:13:14.066407 kernel: Dynamic Preempt: voluntary
Jan 29 12:13:14.066416 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 12:13:14.066427 kernel: rcu: RCU event tracing is enabled.
Jan 29 12:13:14.066436 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 12:13:14.066444 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 12:13:14.066453 kernel: Rude variant of Tasks RCU enabled.
Jan 29 12:13:14.066461 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 12:13:14.066472 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 12:13:14.066481 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 12:13:14.066489 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 12:13:14.066497 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 12:13:14.066506 kernel: Console: colour VGA+ 80x25
Jan 29 12:13:14.066514 kernel: printk: console [tty0] enabled
Jan 29 12:13:14.066522 kernel: printk: console [ttyS0] enabled
Jan 29 12:13:14.066531 kernel: ACPI: Core revision 20230628
Jan 29 12:13:14.066539 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 12:13:14.066550 kernel: x2apic enabled
Jan 29 12:13:14.066559 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 12:13:14.066567 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 12:13:14.066576 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 12:13:14.066584 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 29 12:13:14.066592 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 12:13:14.066618 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 12:13:14.066628 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 12:13:14.066636 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 12:13:14.066648 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 12:13:14.066657 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 12:13:14.066665 kernel: Speculative Store Bypass: Vulnerable
Jan 29 12:13:14.066673 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 29 12:13:14.066682 kernel: Freeing SMP alternatives memory: 32K
Jan 29 12:13:14.066697 kernel: pid_max: default: 32768 minimum: 301
Jan 29 12:13:14.066708 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 12:13:14.066716 kernel: landlock: Up and running.
Jan 29 12:13:14.066725 kernel: SELinux: Initializing.
Jan 29 12:13:14.066734 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 12:13:14.066743 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 12:13:14.066752 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 29 12:13:14.066763 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:13:14.066772 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:13:14.066781 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:13:14.066790 kernel: Performance Events: AMD PMU driver.
Jan 29 12:13:14.066799 kernel: ... version:                0
Jan 29 12:13:14.066809 kernel: ... bit width:              48
Jan 29 12:13:14.066818 kernel: ... generic registers:      4
Jan 29 12:13:14.066827 kernel: ... value mask:             0000ffffffffffff
Jan 29 12:13:14.066836 kernel: ... max period:             00007fffffffffff
Jan 29 12:13:14.066844 kernel: ... fixed-purpose events:   0
Jan 29 12:13:14.066853 kernel: ... event mask:             000000000000000f
Jan 29 12:13:14.066862 kernel: signal: max sigframe size: 1440
Jan 29 12:13:14.066870 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 12:13:14.066880 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 12:13:14.066891 kernel: smp: Bringing up secondary CPUs ...
Jan 29 12:13:14.066900 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 12:13:14.066908 kernel: .... node #0, CPUs: #1
Jan 29 12:13:14.066917 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 12:13:14.066926 kernel: smpboot: Max logical packages: 2
Jan 29 12:13:14.066935 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 29 12:13:14.066943 kernel: devtmpfs: initialized
Jan 29 12:13:14.066952 kernel: x86/mm: Memory block size: 128MB
Jan 29 12:13:14.066961 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 12:13:14.066972 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 12:13:14.066981 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 12:13:14.066990 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 12:13:14.066998 kernel: audit: initializing netlink subsys (disabled)
Jan 29 12:13:14.067007 kernel: audit: type=2000 audit(1738152793.089:1): state=initialized audit_enabled=0 res=1
Jan 29 12:13:14.067016 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 12:13:14.067025 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 12:13:14.067034 kernel: cpuidle: using governor menu
Jan 29 12:13:14.067042 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 12:13:14.067053 kernel: dca service started, version 1.12.1
Jan 29 12:13:14.067062 kernel: PCI: Using configuration type 1 for base access
Jan 29 12:13:14.067071 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 12:13:14.067080 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 12:13:14.067088 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 12:13:14.067097 kernel: ACPI: Added _OSI(Module Device)
Jan 29 12:13:14.067106 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 12:13:14.067115 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 12:13:14.067123 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 12:13:14.067134 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 12:13:14.067143 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 12:13:14.067152 kernel: ACPI: Interpreter enabled
Jan 29 12:13:14.067160 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 12:13:14.067169 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 12:13:14.067178 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 12:13:14.067187 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 12:13:14.067196 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 29 12:13:14.067205 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 12:13:14.067342 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 12:13:14.067441 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 29 12:13:14.067529 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 29 12:13:14.067543 kernel: acpiphp: Slot [3] registered
Jan 29 12:13:14.067552 kernel: acpiphp: Slot [4] registered
Jan 29 12:13:14.067560 kernel: acpiphp: Slot [5] registered
Jan 29 12:13:14.067569 kernel: acpiphp: Slot [6] registered
Jan 29 12:13:14.067578 kernel: acpiphp: Slot [7] registered
Jan 29 12:13:14.067589 kernel: acpiphp: Slot [8] registered
Jan 29 12:13:14.067598 kernel: acpiphp: Slot [9] registered
Jan 29 12:13:14.067622 kernel: acpiphp: Slot [10] registered
Jan 29 12:13:14.067630 kernel: acpiphp: Slot [11] registered
Jan 29 12:13:14.067654 kernel: acpiphp: Slot [12] registered
Jan 29 12:13:14.067662 kernel: acpiphp: Slot [13] registered
Jan 29 12:13:14.067671 kernel: acpiphp: Slot [14] registered
Jan 29 12:13:14.067679 kernel: acpiphp: Slot [15] registered
Jan 29 12:13:14.067688 kernel: acpiphp: Slot [16] registered
Jan 29 12:13:14.067699 kernel: acpiphp: Slot [17] registered
Jan 29 12:13:14.067708 kernel: acpiphp: Slot [18] registered
Jan 29 12:13:14.067716 kernel: acpiphp: Slot [19] registered
Jan 29 12:13:14.067725 kernel: acpiphp: Slot [20] registered
Jan 29 12:13:14.067733 kernel: acpiphp: Slot [21] registered
Jan 29 12:13:14.067742 kernel: acpiphp: Slot [22] registered
Jan 29 12:13:14.067751 kernel: acpiphp: Slot [23] registered
Jan 29 12:13:14.067759 kernel: acpiphp: Slot [24] registered
Jan 29 12:13:14.067768 kernel: acpiphp: Slot [25] registered
Jan 29 12:13:14.067776 kernel: acpiphp: Slot [26] registered
Jan 29 12:13:14.067787 kernel: acpiphp: Slot [27] registered
Jan 29 12:13:14.067796 kernel: acpiphp: Slot [28] registered
Jan 29 12:13:14.067805 kernel: acpiphp: Slot [29] registered
Jan 29 12:13:14.067813 kernel: acpiphp: Slot [30] registered
Jan 29 12:13:14.067823 kernel: acpiphp: Slot [31] registered
Jan 29 12:13:14.067832 kernel: PCI host bridge to bus 0000:00
Jan 29 12:13:14.067933 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 12:13:14.068018 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 12:13:14.068106 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 12:13:14.068188 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 12:13:14.068272 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 29 12:13:14.068352 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 12:13:14.068464 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 29 12:13:14.068567 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 29 12:13:14.068698 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 29 12:13:14.068797 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 29 12:13:14.068894 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 29 12:13:14.068986 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 29 12:13:14.069092 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 29 12:13:14.069184 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 29 12:13:14.069282 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 29 12:13:14.069381 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 29 12:13:14.069473 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 29 12:13:14.069576 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 29 12:13:14.069784 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 29 12:13:14.069882 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 29 12:13:14.069976 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 29 12:13:14.070073 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 29 12:13:14.070165 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 12:13:14.070265 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 12:13:14.070370 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 29 12:13:14.070463 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 29 12:13:14.070554 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 29 12:13:14.070670 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 29 12:13:14.070769 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 29 12:13:14.070866 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 12:13:14.070958 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 29 12:13:14.071050 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 29 12:13:14.071161 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 29 12:13:14.071254 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 29 12:13:14.071344 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 29 12:13:14.071448 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 12:13:14.071540 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 29 12:13:14.071667 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 29 12:13:14.071763 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 29 12:13:14.071777 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 12:13:14.071786 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 12:13:14.071795 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 12:13:14.071805 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 12:13:14.071817 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 29 12:13:14.071826 kernel: iommu: Default domain type: Translated
Jan 29 12:13:14.071835 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 12:13:14.071844 kernel: PCI: Using ACPI for IRQ routing
Jan 29 12:13:14.071852 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 12:13:14.071861 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 12:13:14.071869 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 29 12:13:14.071964 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 29 12:13:14.072056 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 29 12:13:14.072153 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 12:13:14.072167 kernel: vgaarb: loaded
Jan 29 12:13:14.072176 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 12:13:14.072184 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 12:13:14.072193 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 12:13:14.072202 kernel: pnp: PnP ACPI init
Jan 29 12:13:14.072295 kernel: pnp 00:03: [dma 2]
Jan 29 12:13:14.072310 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 12:13:14.072319 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 12:13:14.072332 kernel: NET: Registered PF_INET protocol family
Jan 29 12:13:14.072340 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 12:13:14.072349 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 12:13:14.072358 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 12:13:14.072367 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 12:13:14.072376 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 12:13:14.072385 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 12:13:14.072394 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 12:13:14.072405 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 12:13:14.072413 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 12:13:14.072422 kernel: NET: Registered PF_XDP protocol family
Jan 29 12:13:14.072504 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 12:13:14.072585 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 12:13:14.072720 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 12:13:14.072800 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 29 12:13:14.072877 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 29 12:13:14.072968 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 29 12:13:14.073078 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 29 12:13:14.073092 kernel: PCI: CLS 0 bytes, default 64
Jan 29 12:13:14.073102 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 29 12:13:14.073111 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 29 12:13:14.073122 kernel: Initialise system trusted keyrings
Jan 29 12:13:14.073131 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 12:13:14.073140 kernel: Key type asymmetric registered
Jan 29 12:13:14.073149 kernel: Asymmetric key parser 'x509' registered
Jan 29 12:13:14.073162 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 12:13:14.073172 kernel: io scheduler mq-deadline registered
Jan 29 12:13:14.073181 kernel: io scheduler kyber registered
Jan 29 12:13:14.073190 kernel: io scheduler bfq registered
Jan 29 12:13:14.073200 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 12:13:14.073210 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 29 12:13:14.073219 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 29 12:13:14.073229 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 29 12:13:14.073239 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 29 12:13:14.073251 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 12:13:14.073261 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 12:13:14.073270 kernel: random: crng init done
Jan 29 12:13:14.073280 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 12:13:14.073289 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 12:13:14.073298 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 12:13:14.073397 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 12:13:14.073412 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 12:13:14.073502 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 12:13:14.073591 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T12:13:13 UTC (1738152793)
Jan 29 12:13:14.073722 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 29 12:13:14.073738 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 12:13:14.073747 kernel: NET: Registered PF_INET6 protocol family
Jan 29 12:13:14.073757 kernel: Segment Routing with IPv6
Jan 29 12:13:14.073766 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 12:13:14.073776 kernel: NET: Registered PF_PACKET protocol family
Jan 29 12:13:14.073785 kernel: Key type dns_resolver registered
Jan 29 12:13:14.073799 kernel: IPI shorthand broadcast: enabled
Jan 29 12:13:14.073808 kernel: sched_clock: Marking stable (968010020, 169213619)->(1186582194, -49358555)
Jan 29 12:13:14.073818 kernel: registered taskstats version 1
Jan 29 12:13:14.073827 kernel: Loading compiled-in X.509 certificates
Jan 29 12:13:14.073837 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55'
Jan 29 12:13:14.073846 kernel: Key type .fscrypt registered
Jan 29 12:13:14.073855 kernel: Key type fscrypt-provisioning registered
Jan 29 12:13:14.073864 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 12:13:14.073876 kernel: ima: Allocated hash algorithm: sha1
Jan 29 12:13:14.073885 kernel: ima: No architecture policies found
Jan 29 12:13:14.073894 kernel: clk: Disabling unused clocks
Jan 29 12:13:14.073904 kernel: Freeing unused kernel image (initmem) memory: 42972K
Jan 29 12:13:14.073913 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 12:13:14.073923 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 29 12:13:14.073932 kernel: Run /init as init process
Jan 29 12:13:14.073941 kernel: with arguments:
Jan 29 12:13:14.073951 kernel: /init
Jan 29 12:13:14.073960 kernel: with environment:
Jan 29 12:13:14.073972 kernel: HOME=/
Jan 29 12:13:14.073981 kernel: TERM=linux
Jan 29 12:13:14.073990 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 12:13:14.074002 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:13:14.074014 systemd[1]: Detected virtualization kvm.
Jan 29 12:13:14.074024 systemd[1]: Detected architecture x86-64.
Jan 29 12:13:14.074034 systemd[1]: Running in initrd.
Jan 29 12:13:14.074046 systemd[1]: No hostname configured, using default hostname.
Jan 29 12:13:14.074057 systemd[1]: Hostname set to .
Jan 29 12:13:14.074068 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:13:14.074077 systemd[1]: Queued start job for default target initrd.target.
Jan 29 12:13:14.074088 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:13:14.074098 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:13:14.074109 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 12:13:14.074129 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:13:14.074142 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 12:13:14.074153 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 12:13:14.074165 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 12:13:14.074176 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 12:13:14.074189 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:13:14.074199 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:13:14.074209 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:13:14.074220 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:13:14.074230 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:13:14.074240 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:13:14.074251 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:13:14.074261 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:13:14.074272 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 12:13:14.074284 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 12:13:14.074295 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:13:14.074305 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:13:14.074315 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:13:14.074326 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:13:14.074336 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 12:13:14.074347 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:13:14.074357 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 12:13:14.074369 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 12:13:14.074380 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:13:14.074390 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:13:14.074418 systemd-journald[185]: Collecting audit messages is disabled.
Jan 29 12:13:14.074445 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:13:14.074456 systemd-journald[185]: Journal started
Jan 29 12:13:14.074479 systemd-journald[185]: Runtime Journal (/run/log/journal/9dfd7d4835b945658b0382b20797cc84) is 8.0M, max 78.3M, 70.3M free.
Jan 29 12:13:14.091432 systemd-modules-load[186]: Inserted module 'overlay'
Jan 29 12:13:14.099835 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:13:14.102955 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 12:13:14.104659 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:13:14.106849 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 12:13:14.125734 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:13:14.138789 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 12:13:14.138749 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:13:14.174443 kernel: Bridge firewalling registered
Jan 29 12:13:14.139810 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jan 29 12:13:14.179905 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:13:14.180819 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:13:14.183035 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:13:14.199846 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:13:14.202732 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:13:14.206350 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:13:14.207657 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:13:14.218022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:13:14.218816 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:13:14.225823 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 12:13:14.229738 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:13:14.230616 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:13:14.250640 dracut-cmdline[219]: dracut-dracut-053
Jan 29 12:13:14.253346 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 12:13:14.269249 systemd-resolved[220]: Positive Trust Anchors:
Jan 29 12:13:14.269267 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:13:14.269310 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:13:14.272550 systemd-resolved[220]: Defaulting to hostname 'linux'.
Jan 29 12:13:14.273798 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:13:14.275227 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:13:14.343675 kernel: SCSI subsystem initialized
Jan 29 12:13:14.354710 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 12:13:14.366675 kernel: iscsi: registered transport (tcp)
Jan 29 12:13:14.388724 kernel: iscsi: registered transport (qla4xxx)
Jan 29 12:13:14.388785 kernel: QLogic iSCSI HBA Driver
Jan 29 12:13:14.449862 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:13:14.458862 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 12:13:14.510999 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 12:13:14.511053 kernel: device-mapper: uevent: version 1.0.3
Jan 29 12:13:14.513165 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 12:13:14.579735 kernel: raid6: sse2x4 gen() 5208 MB/s
Jan 29 12:13:14.598712 kernel: raid6: sse2x2 gen() 5983 MB/s
Jan 29 12:13:14.616950 kernel: raid6: sse2x1 gen() 8736 MB/s
Jan 29 12:13:14.617022 kernel: raid6: using algorithm sse2x1 gen() 8736 MB/s
Jan 29 12:13:14.636007 kernel: raid6: .... xor() 7323 MB/s, rmw enabled
Jan 29 12:13:14.636079 kernel: raid6: using ssse3x2 recovery algorithm
Jan 29 12:13:14.657668 kernel: xor: measuring software checksum speed
Jan 29 12:13:14.657733 kernel: prefetch64-sse : 17016 MB/sec
Jan 29 12:13:14.660097 kernel: generic_sse : 15700 MB/sec
Jan 29 12:13:14.660142 kernel: xor: using function: prefetch64-sse (17016 MB/sec)
Jan 29 12:13:14.845664 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 12:13:14.860443 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:13:14.869924 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:13:14.901113 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Jan 29 12:13:14.907746 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:13:14.920279 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 12:13:14.948226 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Jan 29 12:13:15.000316 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:13:15.007957 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:13:15.063674 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:13:15.075993 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 12:13:15.094795 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:13:15.099914 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:13:15.104350 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:13:15.107400 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:13:15.116908 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 12:13:15.143000 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:13:15.159639 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 29 12:13:15.180247 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 29 12:13:15.180366 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 12:13:15.180381 kernel: GPT:17805311 != 20971519
Jan 29 12:13:15.180392 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 12:13:15.180405 kernel: GPT:17805311 != 20971519
Jan 29 12:13:15.180415 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 12:13:15.180426 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:13:15.211721 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:13:15.213815 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:13:15.214803 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:13:15.215464 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:13:15.216363 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:13:15.219706 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:13:15.233962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:13:15.245398 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (460)
Jan 29 12:13:15.245425 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (463)
Jan 29 12:13:15.269646 kernel: libata version 3.00 loaded.
Jan 29 12:13:15.274137 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 29 12:13:15.277297 kernel: scsi host0: ata_piix
Jan 29 12:13:15.277435 kernel: scsi host1: ata_piix
Jan 29 12:13:15.277558 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 29 12:13:15.277574 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 29 12:13:15.275892 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 12:13:15.313461 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 12:13:15.314305 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:13:15.320076 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 12:13:15.320702 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 12:13:15.328943 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 12:13:15.338980 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 12:13:15.344872 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:13:15.355516 disk-uuid[503]: Primary Header is updated.
Jan 29 12:13:15.355516 disk-uuid[503]: Secondary Entries is updated.
Jan 29 12:13:15.355516 disk-uuid[503]: Secondary Header is updated.
Jan 29 12:13:15.362895 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:13:15.381019 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:13:16.457849 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:13:16.459413 disk-uuid[504]: The operation has completed successfully.
Jan 29 12:13:16.531105 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 12:13:16.531360 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 12:13:16.568721 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 12:13:16.575848 sh[524]: Success
Jan 29 12:13:16.603692 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 29 12:13:16.673670 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 12:13:16.675701 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 12:13:16.678147 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 12:13:16.709964 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44
Jan 29 12:13:16.710028 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:13:16.713503 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 12:13:16.717180 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 12:13:16.719956 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 12:13:16.739080 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 12:13:16.740230 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 12:13:16.749811 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 12:13:16.752862 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 12:13:16.786566 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 12:13:16.786637 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:13:16.786651 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:13:16.800647 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:13:16.832974 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 12:13:16.836620 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 12:13:16.850427 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 12:13:16.856806 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 12:13:16.896921 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:13:16.906783 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:13:16.931377 systemd-networkd[707]: lo: Link UP
Jan 29 12:13:16.931393 systemd-networkd[707]: lo: Gained carrier
Jan 29 12:13:16.932835 systemd-networkd[707]: Enumeration completed
Jan 29 12:13:16.933728 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:13:16.933755 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:13:16.933760 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:13:16.936461 systemd-networkd[707]: eth0: Link UP
Jan 29 12:13:16.936469 systemd-networkd[707]: eth0: Gained carrier
Jan 29 12:13:16.936489 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:13:16.938489 systemd[1]: Reached target network.target - Network.
Jan 29 12:13:16.951938 systemd-networkd[707]: eth0: DHCPv4 address 172.24.4.137/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 29 12:13:17.625172 ignition[671]: Ignition 2.20.0
Jan 29 12:13:17.625202 ignition[671]: Stage: fetch-offline
Jan 29 12:13:17.625285 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:13:17.625308 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:13:17.630289 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:13:17.625544 ignition[671]: parsed url from cmdline: ""
Jan 29 12:13:17.625558 ignition[671]: no config URL provided
Jan 29 12:13:17.625578 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:13:17.625656 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:13:17.639987 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 12:13:17.625674 ignition[671]: failed to fetch config: resource requires networking
Jan 29 12:13:17.626227 ignition[671]: Ignition finished successfully
Jan 29 12:13:17.669750 ignition[716]: Ignition 2.20.0
Jan 29 12:13:17.670559 ignition[716]: Stage: fetch
Jan 29 12:13:17.670759 ignition[716]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:13:17.670770 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:13:17.671855 ignition[716]: parsed url from cmdline: ""
Jan 29 12:13:17.671859 ignition[716]: no config URL provided
Jan 29 12:13:17.671865 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:13:17.671876 ignition[716]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:13:17.671966 ignition[716]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 29 12:13:17.671987 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 29 12:13:17.672005 ignition[716]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 29 12:13:17.918867 ignition[716]: GET result: OK
Jan 29 12:13:17.918994 ignition[716]: parsing config with SHA512: df9edaa0edc8116f66c19327be5f89beec353b046c2b826dc5f3c1e9796dd6e635e71a99aa45644b614b3868af628b4492a3f27d8c4642ec3e3e156cfaf1210a
Jan 29 12:13:17.925781 unknown[716]: fetched base config from "system"
Jan 29 12:13:17.926036 unknown[716]: fetched base config from "system"
Jan 29 12:13:17.926924 ignition[716]: fetch: fetch complete
Jan 29 12:13:17.926120 unknown[716]: fetched user config from "openstack"
Jan 29 12:13:17.926937 ignition[716]: fetch: fetch passed
Jan 29 12:13:17.930379 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 12:13:17.927072 ignition[716]: Ignition finished successfully
Jan 29 12:13:17.940972 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 12:13:17.975286 ignition[723]: Ignition 2.20.0
Jan 29 12:13:17.975306 ignition[723]: Stage: kargs
Jan 29 12:13:17.975745 ignition[723]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:13:17.975773 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:13:17.979959 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 12:13:17.977701 ignition[723]: kargs: kargs passed
Jan 29 12:13:17.977803 ignition[723]: Ignition finished successfully
Jan 29 12:13:17.992960 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 12:13:18.037760 ignition[729]: Ignition 2.20.0
Jan 29 12:13:18.037870 ignition[729]: Stage: disks
Jan 29 12:13:18.038327 ignition[729]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:13:18.042163 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 12:13:18.038354 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:13:18.045551 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 12:13:18.040176 ignition[729]: disks: disks passed
Jan 29 12:13:18.047487 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 12:13:18.040267 ignition[729]: Ignition finished successfully
Jan 29 12:13:18.050682 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:13:18.053706 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:13:18.056089 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:13:18.065939 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 12:13:18.111260 systemd-fsck[737]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 12:13:18.123132 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 12:13:18.131828 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 12:13:18.300649 kernel: EXT4-fs (vda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none.
Jan 29 12:13:18.300884 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 12:13:18.301923 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:13:18.308803 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:13:18.312290 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 12:13:18.316532 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 12:13:18.338408 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (745)
Jan 29 12:13:18.338458 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 12:13:18.338489 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:13:18.338518 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:13:18.338546 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:13:18.321244 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 29 12:13:18.343257 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 12:13:18.343345 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:13:18.352578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:13:18.354596 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 12:13:18.371888 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 12:13:18.487625 initrd-setup-root[776]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 12:13:18.493540 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory
Jan 29 12:13:18.499845 initrd-setup-root[790]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 12:13:18.504106 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 12:13:18.610966 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 12:13:18.617819 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 12:13:18.620909 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 12:13:18.632317 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 12:13:18.634060 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 12:13:18.670508 ignition[864]: INFO : Ignition 2.20.0
Jan 29 12:13:18.671916 ignition[864]: INFO : Stage: mount
Jan 29 12:13:18.672473 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:13:18.672473 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:13:18.677194 ignition[864]: INFO : mount: mount passed
Jan 29 12:13:18.677194 ignition[864]: INFO : Ignition finished successfully
Jan 29 12:13:18.678011 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 12:13:18.679400 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 12:13:18.731895 systemd-networkd[707]: eth0: Gained IPv6LL
Jan 29 12:13:25.570228 coreos-metadata[747]: Jan 29 12:13:25.570 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 12:13:25.612076 coreos-metadata[747]: Jan 29 12:13:25.611 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 29 12:13:25.628428 coreos-metadata[747]: Jan 29 12:13:25.628 INFO Fetch successful
Jan 29 12:13:25.630107 coreos-metadata[747]: Jan 29 12:13:25.629 INFO wrote hostname ci-4152-2-0-0-aca045361a.novalocal to /sysroot/etc/hostname
Jan 29 12:13:25.632106 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 29 12:13:25.632315 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 29 12:13:25.648806 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 12:13:25.669908 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:13:25.698703 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (881)
Jan 29 12:13:25.706334 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 12:13:25.706397 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:13:25.709662 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:13:25.721674 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:13:25.726513 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:13:25.769697 ignition[899]: INFO : Ignition 2.20.0
Jan 29 12:13:25.769697 ignition[899]: INFO : Stage: files
Jan 29 12:13:25.774385 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:13:25.774385 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:13:25.775945 ignition[899]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 12:13:25.777732 ignition[899]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 12:13:25.778520 ignition[899]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 12:13:25.786548 ignition[899]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 12:13:25.787507 ignition[899]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 12:13:25.788381 unknown[899]: wrote ssh authorized keys file for user: core
Jan 29 12:13:25.789181 ignition[899]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 12:13:25.794758 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 12:13:25.795724 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 12:13:25.796627 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:13:25.797553 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:13:25.797553 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 12:13:25.797553 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 12:13:25.800810 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 12:13:25.800810 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Jan 29 12:13:26.216134 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 29 12:13:27.882660 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 29 12:13:27.884446 ignition[899]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:13:27.884446 ignition[899]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:13:27.884446 ignition[899]: INFO : files: files passed
Jan 29 12:13:27.884446 ignition[899]: INFO : Ignition finished successfully
Jan 29 12:13:27.884483 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 12:13:27.898906 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 12:13:27.901866 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 12:13:27.904440 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 12:13:27.905257 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 12:13:27.920860 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:13:27.920860 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:13:27.923579 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:13:27.924810 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:13:27.926180 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 12:13:27.938810 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 12:13:27.963490 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 12:13:27.963652 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 12:13:27.965254 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 12:13:27.966431 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 12:13:27.967867 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 12:13:27.973799 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 12:13:27.987233 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:13:27.991795 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 12:13:28.005997 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:13:28.006838 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:13:28.008227 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 12:13:28.009523 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 12:13:28.009748 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:13:28.011236 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 12:13:28.012180 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 12:13:28.013584 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 12:13:28.014894 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:13:28.016071 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 12:13:28.017414 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 12:13:28.022702 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:13:28.024060 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 12:13:28.025369 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 12:13:28.026803 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 12:13:28.028090 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 12:13:28.028282 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:13:28.030001 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:13:28.031423 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:13:28.032688 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 12:13:28.033032 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:13:28.034037 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 12:13:28.034206 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:13:28.035817 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 12:13:28.035995 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:13:28.036830 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 12:13:28.036987 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 12:13:28.048149 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 12:13:28.050926 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 12:13:28.051576 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 12:13:28.052667 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:13:28.057824 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 12:13:28.058058 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:13:28.066352 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 12:13:28.066467 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 12:13:28.076466 ignition[951]: INFO : Ignition 2.20.0
Jan 29 12:13:28.079846 ignition[951]: INFO : Stage: umount
Jan 29 12:13:28.079846 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:13:28.079846 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 12:13:28.085821 ignition[951]: INFO : umount: umount passed
Jan 29 12:13:28.085821 ignition[951]: INFO : Ignition finished successfully
Jan 29 12:13:28.086440 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 12:13:28.086970 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 12:13:28.087078 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 12:13:28.088171 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 12:13:28.088693 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 12:13:28.089931 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 12:13:28.089976 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 12:13:28.090508 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 12:13:28.090548 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 12:13:28.091095 systemd[1]: Stopped target network.target - Network.
Jan 29 12:13:28.091798 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 12:13:28.091848 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:13:28.092958 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 12:13:28.094010 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 12:13:28.097773 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:13:28.098621 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 12:13:28.099777 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 12:13:28.100880 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 12:13:28.100916 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:13:28.102889 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 12:13:28.102923 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:13:28.104715 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 12:13:28.104760 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 12:13:28.105812 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 12:13:28.105855 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 12:13:28.106926 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 12:13:28.107902 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 12:13:28.109092 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 12:13:28.109176 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 12:13:28.110272 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 12:13:28.110348 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 12:13:28.111664 systemd-networkd[707]: eth0: DHCPv6 lease lost
Jan 29 12:13:28.113201 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 12:13:28.113293 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 12:13:28.115102 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 12:13:28.115134 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:13:28.122763 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 12:13:28.123296 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 12:13:28.123348 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:13:28.125582 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:13:28.127141 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 12:13:28.127236 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 12:13:28.135899 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 12:13:28.136026 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:13:28.136992 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 12:13:28.137140 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 12:13:28.139171 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 12:13:28.139219 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:13:28.140353 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 12:13:28.140383 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:13:28.141424 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 12:13:28.141471 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:13:28.143085 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 12:13:28.143125 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:13:28.144224 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:13:28.144263 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:13:28.154770 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 12:13:28.156761 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 12:13:28.156814 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:13:28.157360 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 12:13:28.157400 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:13:28.161702 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 12:13:28.161743 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:13:28.162866 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 12:13:28.162906 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:13:28.164033 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 12:13:28.164073 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:13:28.165240 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 12:13:28.165281 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:13:28.166555 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:13:28.166596 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:13:28.168310 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 12:13:28.168391 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 12:13:28.169560 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 12:13:28.177814 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 12:13:28.182970 systemd[1]: Switching root.
Jan 29 12:13:28.212834 systemd-journald[185]: Journal stopped
Jan 29 12:13:30.505131 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Jan 29 12:13:30.505188 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 12:13:30.505211 kernel: SELinux: policy capability open_perms=1
Jan 29 12:13:30.505223 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 12:13:30.505235 kernel: SELinux: policy capability always_check_network=0
Jan 29 12:13:30.505247 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 12:13:30.505266 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 12:13:30.505277 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 12:13:30.505289 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 12:13:30.505305 kernel: audit: type=1403 audit(1738152809.302:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 12:13:30.505318 systemd[1]: Successfully loaded SELinux policy in 123.358ms.
Jan 29 12:13:30.505341 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.258ms.
Jan 29 12:13:30.505355 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:13:30.505370 systemd[1]: Detected virtualization kvm.
Jan 29 12:13:30.505384 systemd[1]: Detected architecture x86-64.
Jan 29 12:13:30.505396 systemd[1]: Detected first boot.
Jan 29 12:13:30.505410 systemd[1]: Hostname set to .
Jan 29 12:13:30.505425 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:13:30.505439 zram_generator::config[993]: No configuration found.
Jan 29 12:13:30.505453 systemd[1]: Populated /etc with preset unit settings.
Jan 29 12:13:30.505466 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 12:13:30.505479 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 12:13:30.505492 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 12:13:30.505505 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 12:13:30.505518 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 12:13:30.505531 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 12:13:30.505545 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 12:13:30.505558 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 12:13:30.505571 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 12:13:30.505584 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 12:13:30.505597 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 12:13:30.505637 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:13:30.505652 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:13:30.505665 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 12:13:30.505681 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 12:13:30.505694 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 12:13:30.505707 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:13:30.505720 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 12:13:30.505733 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:13:30.505745 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 12:13:30.505759 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 12:13:30.505773 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:13:30.505786 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 12:13:30.505799 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:13:30.505811 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:13:30.505823 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:13:30.505838 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:13:30.505851 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 12:13:30.505863 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 12:13:30.505876 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:13:30.505891 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:13:30.505903 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:13:30.505916 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 12:13:30.505928 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 12:13:30.505940 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 12:13:30.505953 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 12:13:30.505965 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:13:30.505978 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 12:13:30.505991 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 12:13:30.506006 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 12:13:30.506019 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 12:13:30.506032 systemd[1]: Reached target machines.target - Containers.
Jan 29 12:13:30.506045 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 12:13:30.506057 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:13:30.506070 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:13:30.506082 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 12:13:30.506095 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:13:30.506111 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 12:13:30.506123 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:13:30.506136 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 12:13:30.506149 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:13:30.506162 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 12:13:30.506175 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 12:13:30.506187 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 12:13:30.506200 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 12:13:30.506214 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 12:13:30.506226 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:13:30.506238 kernel: fuse: init (API version 7.39)
Jan 29 12:13:30.506250 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:13:30.506264 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 12:13:30.506277 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 12:13:30.506294 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:13:30.506306 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 12:13:30.506319 systemd[1]: Stopped verity-setup.service.
Jan 29 12:13:30.506331 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:13:30.506346 kernel: ACPI: bus type drm_connector registered
Jan 29 12:13:30.506358 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 12:13:30.506371 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 12:13:30.506384 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 12:13:30.506396 kernel: loop: module loaded
Jan 29 12:13:30.506408 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 12:13:30.506421 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 12:13:30.506433 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 12:13:30.506448 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 12:13:30.506461 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:13:30.506490 systemd-journald[1096]: Collecting audit messages is disabled.
Jan 29 12:13:30.506520 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 12:13:30.506536 systemd-journald[1096]: Journal started
Jan 29 12:13:30.506561 systemd-journald[1096]: Runtime Journal (/run/log/journal/9dfd7d4835b945658b0382b20797cc84) is 8.0M, max 78.3M, 70.3M free.
Jan 29 12:13:30.130830 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 12:13:30.158368 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 12:13:30.158775 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 12:13:30.513032 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 12:13:30.513070 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:13:30.514255 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:13:30.514417 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:13:30.515154 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 12:13:30.515278 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 12:13:30.516019 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:13:30.516130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:13:30.516908 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 12:13:30.517030 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 12:13:30.517856 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:13:30.518005 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:13:30.518819 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:13:30.519504 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 12:13:30.520289 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 12:13:30.528257 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 12:13:30.534721 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 12:13:30.538696 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 12:13:30.539314 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 12:13:30.539350 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:13:30.541196 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 12:13:30.546518 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 12:13:30.553781 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 12:13:30.554489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:13:30.560762 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 12:13:30.572840 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 12:13:30.574768 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 12:13:30.577707 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 12:13:30.578270 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 12:13:30.579122 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:13:30.581297 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 12:13:30.583764 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:13:30.586326 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 12:13:30.587389 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 12:13:30.588224 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 12:13:30.597233 systemd-journald[1096]: Time spent on flushing to /var/log/journal/9dfd7d4835b945658b0382b20797cc84 is 65.544ms for 929 entries.
Jan 29 12:13:30.597233 systemd-journald[1096]: System Journal (/var/log/journal/9dfd7d4835b945658b0382b20797cc84) is 8.0M, max 584.8M, 576.8M free.
Jan 29 12:13:30.700524 systemd-journald[1096]: Received client request to flush runtime journal.
Jan 29 12:13:30.700568 kernel: loop0: detected capacity change from 0 to 140992
Jan 29 12:13:30.613480 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:13:30.623758 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 12:13:30.649417 udevadm[1133]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 12:13:30.650902 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 12:13:30.651690 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 12:13:30.664772 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 12:13:30.667474 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:13:30.678090 systemd-tmpfiles[1127]: ACLs are not supported, ignoring.
Jan 29 12:13:30.678106 systemd-tmpfiles[1127]: ACLs are not supported, ignoring.
Jan 29 12:13:30.682357 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:13:30.684843 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 12:13:30.703951 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 12:13:30.742643 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 12:13:30.750285 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 12:13:30.751808 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 12:13:30.768627 kernel: loop1: detected capacity change from 0 to 218376
Jan 29 12:13:30.772210 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 12:13:30.781740 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:13:30.814509 systemd-tmpfiles[1149]: ACLs are not supported, ignoring.
Jan 29 12:13:30.814535 systemd-tmpfiles[1149]: ACLs are not supported, ignoring.
Jan 29 12:13:30.820890 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:13:30.868779 kernel: loop2: detected capacity change from 0 to 8
Jan 29 12:13:30.893641 kernel: loop3: detected capacity change from 0 to 138184
Jan 29 12:13:30.990410 kernel: loop4: detected capacity change from 0 to 140992
Jan 29 12:13:31.053623 kernel: loop5: detected capacity change from 0 to 218376
Jan 29 12:13:31.102872 kernel: loop6: detected capacity change from 0 to 8
Jan 29 12:13:31.106053 kernel: loop7: detected capacity change from 0 to 138184
Jan 29 12:13:31.184687 (sd-merge)[1155]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 29 12:13:31.185486 (sd-merge)[1155]: Merged extensions into '/usr'.
Jan 29 12:13:31.197122 systemd[1]: Reloading requested from client PID 1126 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 12:13:31.197140 systemd[1]: Reloading...
Jan 29 12:13:31.277633 zram_generator::config[1181]: No configuration found.
Jan 29 12:13:31.463426 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:13:31.557050 systemd[1]: Reloading finished in 357 ms.
Jan 29 12:13:31.593343 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 12:13:31.594452 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 12:13:31.603797 systemd[1]: Starting ensure-sysext.service...
Jan 29 12:13:31.606366 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:13:31.612016 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:13:31.618066 systemd[1]: Reloading requested from client PID 1237 ('systemctl') (unit ensure-sysext.service)...
Jan 29 12:13:31.618083 systemd[1]: Reloading...
Jan 29 12:13:31.643062 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 12:13:31.645034 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 12:13:31.645961 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 12:13:31.646269 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Jan 29 12:13:31.646326 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Jan 29 12:13:31.660994 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 12:13:31.661110 systemd-tmpfiles[1238]: Skipping /boot
Jan 29 12:13:31.681168 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 12:13:31.682645 systemd-tmpfiles[1238]: Skipping /boot
Jan 29 12:13:31.683719 zram_generator::config[1264]: No configuration found.
Jan 29 12:13:31.698761 systemd-udevd[1239]: Using default interface naming scheme 'v255'.
Jan 29 12:13:31.834511 ldconfig[1121]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 12:13:31.859645 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1328)
Jan 29 12:13:31.909314 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:13:31.966640 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 29 12:13:31.989701 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 29 12:13:32.006651 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 29 12:13:32.015220 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 12:13:32.015552 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 12:13:32.016598 systemd[1]: Reloading finished in 398 ms.
Jan 29 12:13:32.026676 kernel: ACPI: button: Power Button [PWRF]
Jan 29 12:13:32.030284 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:13:32.031243 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 12:13:32.038007 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:13:32.061741 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 12:13:32.075338 systemd[1]: Finished ensure-sysext.service.
Jan 29 12:13:32.077646 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 29 12:13:32.077718 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 29 12:13:32.082772 kernel: Console: switching to colour dummy device 80x25
Jan 29 12:13:32.082842 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 29 12:13:32.082860 kernel: [drm] features: -context_init
Jan 29 12:13:32.087108 kernel: [drm] number of scanouts: 1
Jan 29 12:13:32.087167 kernel: [drm] number of cap sets: 0
Jan 29 12:13:32.089408 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:13:32.089626 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 29 12:13:32.095154 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 12:13:32.102511 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 29 12:13:32.102560 kernel: Console: switching to colour frame buffer device 160x50
Jan 29 12:13:32.109638 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 29 12:13:32.121779 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 12:13:32.121995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:13:32.124898 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:13:32.127352 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 12:13:32.130814 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:13:32.133832 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:13:32.134018 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:13:32.136094 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 12:13:32.140739 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 12:13:32.149855 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:13:32.157687 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:13:32.167815 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 12:13:32.170740 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 12:13:32.172881 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:13:32.173757 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:13:32.174568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:13:32.175549 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:13:32.177411 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:13:32.179690 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:13:32.180932 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 12:13:32.181566 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 12:13:32.182759 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:13:32.182877 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:13:32.183136 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 12:13:32.193724 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 12:13:32.193892 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 12:13:32.200784 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 12:13:32.208735 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 12:13:32.210481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:13:32.210726 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:13:32.211923 augenrules[1397]: No rules
Jan 29 12:13:32.217714 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:13:32.219941 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:13:32.221598 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:13:32.225365 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 12:13:32.238003 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:13:32.258398 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:13:32.304015 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:13:32.304998 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:13:32.315772 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:13:32.345867 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:13:32.352099 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:13:32.354224 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:13:32.362804 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:13:32.385673 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:13:32.386828 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:13:32.393596 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:13:32.416033 systemd-networkd[1373]: lo: Link UP Jan 29 12:13:32.416043 systemd-networkd[1373]: lo: Gained carrier Jan 29 12:13:32.417248 systemd-networkd[1373]: Enumeration completed Jan 29 12:13:32.417340 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 29 12:13:32.421692 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:13:32.421703 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:13:32.422331 systemd-networkd[1373]: eth0: Link UP Jan 29 12:13:32.422339 systemd-networkd[1373]: eth0: Gained carrier Jan 29 12:13:32.422352 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:13:32.430867 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:13:32.431680 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 12:13:32.434536 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:13:32.440830 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:13:32.443419 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:13:32.447386 systemd-networkd[1373]: eth0: DHCPv4 address 172.24.4.137/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 29 12:13:32.448091 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. Jan 29 12:13:32.463332 systemd-resolved[1378]: Positive Trust Anchors: Jan 29 12:13:32.463355 systemd-resolved[1378]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:13:32.463397 systemd-resolved[1378]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:13:32.468665 systemd-resolved[1378]: Using system hostname 'ci-4152-2-0-0-aca045361a.novalocal'. Jan 29 12:13:32.470130 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:13:32.471899 systemd[1]: Reached target network.target - Network. Jan 29 12:13:32.472443 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:13:32.472925 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:13:32.473485 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:13:32.474414 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:13:32.475494 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:13:32.476220 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:13:32.477563 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:13:32.479060 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:13:32.479090 systemd[1]: Reached target paths.target - Path Units. 
Jan 29 12:13:32.480581 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:13:32.970337 systemd-resolved[1378]: Clock change detected. Flushing caches. Jan 29 12:13:32.970420 systemd-timesyncd[1381]: Contacted time server 45.132.96.81:123 (0.flatcar.pool.ntp.org). Jan 29 12:13:32.970474 systemd-timesyncd[1381]: Initial clock synchronization to Wed 2025-01-29 12:13:32.970271 UTC. Jan 29 12:13:32.971628 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:13:32.976466 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:13:32.983181 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:13:32.984875 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:13:32.987513 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:13:32.988924 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:13:32.990719 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:13:32.990752 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:13:32.997034 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:13:33.002569 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 12:13:33.015181 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:13:33.027470 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:13:33.032212 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:13:33.033535 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:13:33.039379 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 29 12:13:33.047095 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:13:33.059136 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:13:33.070183 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:13:33.070541 dbus-daemon[1432]: [system] SELinux support is enabled Jan 29 12:13:33.071276 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:13:33.071760 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:13:33.077268 jq[1435]: false Jan 29 12:13:33.080085 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:13:33.087007 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:13:33.090309 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:13:33.098986 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:13:33.099192 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:13:33.099473 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:13:33.099618 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:13:33.106425 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:13:33.106479 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 29 12:13:33.108517 update_engine[1442]: I20250129 12:13:33.108456 1442 main.cc:92] Flatcar Update Engine starting Jan 29 12:13:33.113266 update_engine[1442]: I20250129 12:13:33.109944 1442 update_check_scheduler.cc:74] Next update check in 7m47s Jan 29 12:13:33.112081 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:13:33.112105 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:13:33.114591 extend-filesystems[1436]: Found loop4 Jan 29 12:13:33.117245 extend-filesystems[1436]: Found loop5 Jan 29 12:13:33.117245 extend-filesystems[1436]: Found loop6 Jan 29 12:13:33.117245 extend-filesystems[1436]: Found loop7 Jan 29 12:13:33.117245 extend-filesystems[1436]: Found vda Jan 29 12:13:33.117245 extend-filesystems[1436]: Found vda1 Jan 29 12:13:33.117245 extend-filesystems[1436]: Found vda2 Jan 29 12:13:33.117245 extend-filesystems[1436]: Found vda3 Jan 29 12:13:33.117245 extend-filesystems[1436]: Found usr Jan 29 12:13:33.117245 extend-filesystems[1436]: Found vda4 Jan 29 12:13:33.117245 extend-filesystems[1436]: Found vda6 Jan 29 12:13:33.117245 extend-filesystems[1436]: Found vda7 Jan 29 12:13:33.117245 extend-filesystems[1436]: Found vda9 Jan 29 12:13:33.117245 extend-filesystems[1436]: Checking size of /dev/vda9 Jan 29 12:13:33.240841 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 29 12:13:33.240900 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1335) Jan 29 12:13:33.240941 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 29 12:13:33.118108 systemd[1]: Started update-engine.service - Update Engine. 
Jan 29 12:13:33.241137 jq[1443]: true Jan 29 12:13:33.241308 extend-filesystems[1436]: Resized partition /dev/vda9 Jan 29 12:13:33.132186 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:13:33.249208 jq[1458]: true Jan 29 12:13:33.249417 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:13:33.164389 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:13:33.256608 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 12:13:33.256608 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:13:33.256608 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 29 12:13:33.164561 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:13:33.278034 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Jan 29 12:13:33.183140 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:13:33.253286 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:13:33.253471 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:13:33.284392 systemd-logind[1441]: New seat seat0. Jan 29 12:13:33.301865 systemd-logind[1441]: Watching system buttons on /dev/input/event2 (Power Button) Jan 29 12:13:33.304191 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:13:33.314961 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:13:33.330840 locksmithd[1453]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:13:33.336266 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:13:33.338409 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:13:33.351330 systemd[1]: Starting sshkeys.service... 
Jan 29 12:13:33.378468 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 12:13:33.388352 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 12:13:33.593541 containerd[1462]: time="2025-01-29T12:13:33.593414040Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 12:13:33.641176 containerd[1462]: time="2025-01-29T12:13:33.640954101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:13:33.643947 containerd[1462]: time="2025-01-29T12:13:33.643148106Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:13:33.643947 containerd[1462]: time="2025-01-29T12:13:33.643178974Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:13:33.643947 containerd[1462]: time="2025-01-29T12:13:33.643195705Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:13:33.643947 containerd[1462]: time="2025-01-29T12:13:33.643345165Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:13:33.643947 containerd[1462]: time="2025-01-29T12:13:33.643364241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:13:33.643947 containerd[1462]: time="2025-01-29T12:13:33.643426057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:13:33.643947 containerd[1462]: time="2025-01-29T12:13:33.643442247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:13:33.643947 containerd[1462]: time="2025-01-29T12:13:33.643588832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:13:33.643947 containerd[1462]: time="2025-01-29T12:13:33.643605554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:13:33.643947 containerd[1462]: time="2025-01-29T12:13:33.643620542Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:13:33.643947 containerd[1462]: time="2025-01-29T12:13:33.643631693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:13:33.644185 containerd[1462]: time="2025-01-29T12:13:33.643707515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:13:33.644185 containerd[1462]: time="2025-01-29T12:13:33.643902130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:13:33.644345 containerd[1462]: time="2025-01-29T12:13:33.644312670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:13:33.644410 containerd[1462]: time="2025-01-29T12:13:33.644396377Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:13:33.644536 containerd[1462]: time="2025-01-29T12:13:33.644518827Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 12:13:33.644636 containerd[1462]: time="2025-01-29T12:13:33.644619666Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:13:33.650537 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:13:33.657881 containerd[1462]: time="2025-01-29T12:13:33.657835692Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:13:33.658041 containerd[1462]: time="2025-01-29T12:13:33.657902277Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:13:33.658041 containerd[1462]: time="2025-01-29T12:13:33.657962971Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:13:33.658041 containerd[1462]: time="2025-01-29T12:13:33.657988769Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:13:33.658041 containerd[1462]: time="2025-01-29T12:13:33.658007655Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:13:33.658415 containerd[1462]: time="2025-01-29T12:13:33.658137408Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:13:33.659743 containerd[1462]: time="2025-01-29T12:13:33.658808046Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 29 12:13:33.659743 containerd[1462]: time="2025-01-29T12:13:33.658952216Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:13:33.659743 containerd[1462]: time="2025-01-29T12:13:33.658982343Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:13:33.659743 containerd[1462]: time="2025-01-29T12:13:33.659010105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:13:33.659743 containerd[1462]: time="2025-01-29T12:13:33.659033088Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:13:33.659743 containerd[1462]: time="2025-01-29T12:13:33.659067653Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:13:33.659743 containerd[1462]: time="2025-01-29T12:13:33.659090165Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:13:33.659743 containerd[1462]: time="2025-01-29T12:13:33.659115683Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:13:33.659743 containerd[1462]: time="2025-01-29T12:13:33.659137704Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:13:33.659743 containerd[1462]: time="2025-01-29T12:13:33.659152632Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:13:33.659743 containerd[1462]: time="2025-01-29T12:13:33.659172439Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 29 12:13:33.659743 containerd[1462]: time="2025-01-29T12:13:33.659190583Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 12:13:33.659743 containerd[1462]: time="2025-01-29T12:13:33.659218486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.659743 containerd[1462]: time="2025-01-29T12:13:33.659240597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659262619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659283858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659307222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659329163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659349271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659370000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659389767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659412850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659427337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659448267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659467633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659489915Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659519440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659554265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.660061 containerd[1462]: time="2025-01-29T12:13:33.659574343Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:13:33.661812 containerd[1462]: time="2025-01-29T12:13:33.660591771Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:13:33.661812 containerd[1462]: time="2025-01-29T12:13:33.660628661Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:13:33.661812 containerd[1462]: time="2025-01-29T12:13:33.660648287Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jan 29 12:13:33.661812 containerd[1462]: time="2025-01-29T12:13:33.660667353Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:13:33.661812 containerd[1462]: time="2025-01-29T12:13:33.660683614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.661812 containerd[1462]: time="2025-01-29T12:13:33.660700105Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:13:33.661812 containerd[1462]: time="2025-01-29T12:13:33.660716996Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:13:33.661812 containerd[1462]: time="2025-01-29T12:13:33.660733247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 12:13:33.662016 containerd[1462]: time="2025-01-29T12:13:33.661071371Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:13:33.662016 containerd[1462]: time="2025-01-29T12:13:33.661133097Z" level=info msg="Connect containerd service" Jan 29 12:13:33.662016 containerd[1462]: time="2025-01-29T12:13:33.661165167Z" level=info msg="using legacy CRI server" Jan 29 12:13:33.662016 containerd[1462]: time="2025-01-29T12:13:33.661177520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:13:33.662016 containerd[1462]: 
time="2025-01-29T12:13:33.661315058Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:13:33.662016 containerd[1462]: time="2025-01-29T12:13:33.661970197Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:13:33.662445 containerd[1462]: time="2025-01-29T12:13:33.662421814Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:13:33.662499 containerd[1462]: time="2025-01-29T12:13:33.662480944Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:13:33.662539 containerd[1462]: time="2025-01-29T12:13:33.662435199Z" level=info msg="Start subscribing containerd event" Jan 29 12:13:33.662572 containerd[1462]: time="2025-01-29T12:13:33.662547930Z" level=info msg="Start recovering state" Jan 29 12:13:33.662622 containerd[1462]: time="2025-01-29T12:13:33.662603124Z" level=info msg="Start event monitor" Jan 29 12:13:33.662652 containerd[1462]: time="2025-01-29T12:13:33.662625255Z" level=info msg="Start snapshots syncer" Jan 29 12:13:33.662652 containerd[1462]: time="2025-01-29T12:13:33.662635444Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:13:33.662652 containerd[1462]: time="2025-01-29T12:13:33.662644682Z" level=info msg="Start streaming server" Jan 29 12:13:33.663072 containerd[1462]: time="2025-01-29T12:13:33.662698382Z" level=info msg="containerd successfully booted in 0.070238s" Jan 29 12:13:33.662773 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:13:33.680513 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:13:33.690274 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:13:33.699066 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 29 12:13:33.699261 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:13:33.714265 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:13:33.722834 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:13:33.730730 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:13:33.741763 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:13:33.744436 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:13:33.767990 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:13:33.783799 systemd[1]: Started sshd@0-172.24.4.137:22-172.24.4.1:59522.service - OpenSSH per-connection server daemon (172.24.4.1:59522). Jan 29 12:13:34.771385 systemd-networkd[1373]: eth0: Gained IPv6LL Jan 29 12:13:34.777672 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:13:34.783561 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:13:34.796585 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:13:34.805619 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:13:34.871683 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:13:35.412133 sshd[1522]: Accepted publickey for core from 172.24.4.1 port 59522 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY Jan 29 12:13:35.416609 sshd-session[1522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:13:35.437205 systemd-logind[1441]: New session 1 of user core. Jan 29 12:13:35.438311 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:13:35.448772 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:13:35.466155 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 29 12:13:35.479343 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 12:13:35.499251 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 12:13:35.671583 systemd[1539]: Queued start job for default target default.target.
Jan 29 12:13:35.674839 systemd[1539]: Created slice app.slice - User Application Slice.
Jan 29 12:13:35.674863 systemd[1539]: Reached target paths.target - Paths.
Jan 29 12:13:35.674878 systemd[1539]: Reached target timers.target - Timers.
Jan 29 12:13:35.676424 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 12:13:35.702857 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 12:13:35.703002 systemd[1539]: Reached target sockets.target - Sockets.
Jan 29 12:13:35.703020 systemd[1539]: Reached target basic.target - Basic System.
Jan 29 12:13:35.703358 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 12:13:35.704987 systemd[1539]: Reached target default.target - Main User Target.
Jan 29 12:13:35.705029 systemd[1539]: Startup finished in 192ms.
Jan 29 12:13:35.712759 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 12:13:36.132432 systemd[1]: Started sshd@1-172.24.4.137:22-172.24.4.1:43828.service - OpenSSH per-connection server daemon (172.24.4.1:43828).
Jan 29 12:13:36.753088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:13:36.762523 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:13:37.455623 sshd[1551]: Accepted publickey for core from 172.24.4.1 port 43828 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:13:37.458071 sshd-session[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:13:37.470729 systemd-logind[1441]: New session 2 of user core.
Jan 29 12:13:37.483560 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 12:13:38.075129 sshd[1564]: Connection closed by 172.24.4.1 port 43828
Jan 29 12:13:38.076002 sshd-session[1551]: pam_unix(sshd:session): session closed for user core
Jan 29 12:13:38.093607 systemd[1]: sshd@1-172.24.4.137:22-172.24.4.1:43828.service: Deactivated successfully.
Jan 29 12:13:38.098879 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 12:13:38.103438 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit.
Jan 29 12:13:38.114267 systemd[1]: Started sshd@2-172.24.4.137:22-172.24.4.1:43830.service - OpenSSH per-connection server daemon (172.24.4.1:43830).
Jan 29 12:13:38.118476 systemd-logind[1441]: Removed session 2.
Jan 29 12:13:38.574229 kubelet[1559]: E0129 12:13:38.574157 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:13:38.577706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:13:38.578084 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:13:38.578597 systemd[1]: kubelet.service: Consumed 2.025s CPU time.
Jan 29 12:13:38.801775 login[1520]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 29 12:13:38.802614 login[1519]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 29 12:13:38.812737 systemd-logind[1441]: New session 4 of user core.
Jan 29 12:13:38.827463 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 12:13:38.834405 systemd-logind[1441]: New session 3 of user core.
Jan 29 12:13:38.846470 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 12:13:39.397122 sshd[1569]: Accepted publickey for core from 172.24.4.1 port 43830 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:13:39.400167 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:13:39.410587 systemd-logind[1441]: New session 5 of user core.
Jan 29 12:13:39.425491 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 12:13:40.042542 sshd[1599]: Connection closed by 172.24.4.1 port 43830
Jan 29 12:13:40.043738 sshd-session[1569]: pam_unix(sshd:session): session closed for user core
Jan 29 12:13:40.050525 systemd[1]: sshd@2-172.24.4.137:22-172.24.4.1:43830.service: Deactivated successfully.
Jan 29 12:13:40.055454 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 12:13:40.059448 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit.
Jan 29 12:13:40.062636 systemd-logind[1441]: Removed session 5.
Jan 29 12:13:40.085406 coreos-metadata[1431]: Jan 29 12:13:40.085 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 12:13:40.135273 coreos-metadata[1431]: Jan 29 12:13:40.135 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 29 12:13:40.474247 coreos-metadata[1494]: Jan 29 12:13:40.474 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 12:13:40.517379 coreos-metadata[1494]: Jan 29 12:13:40.517 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 29 12:13:40.600790 coreos-metadata[1431]: Jan 29 12:13:40.600 INFO Fetch successful
Jan 29 12:13:40.600790 coreos-metadata[1431]: Jan 29 12:13:40.600 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 29 12:13:40.614191 coreos-metadata[1431]: Jan 29 12:13:40.614 INFO Fetch successful
Jan 29 12:13:40.614191 coreos-metadata[1431]: Jan 29 12:13:40.614 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 29 12:13:40.627346 coreos-metadata[1431]: Jan 29 12:13:40.627 INFO Fetch successful
Jan 29 12:13:40.627346 coreos-metadata[1431]: Jan 29 12:13:40.627 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 29 12:13:40.639999 coreos-metadata[1431]: Jan 29 12:13:40.639 INFO Fetch successful
Jan 29 12:13:40.639999 coreos-metadata[1431]: Jan 29 12:13:40.640 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 29 12:13:40.651413 coreos-metadata[1431]: Jan 29 12:13:40.651 INFO Fetch successful
Jan 29 12:13:40.651413 coreos-metadata[1431]: Jan 29 12:13:40.651 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 29 12:13:40.662179 coreos-metadata[1431]: Jan 29 12:13:40.662 INFO Fetch successful
Jan 29 12:13:40.714887 coreos-metadata[1494]: Jan 29 12:13:40.714 INFO Fetch successful
Jan 29 12:13:40.714887 coreos-metadata[1494]: Jan 29 12:13:40.714 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 29 12:13:40.715155 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 29 12:13:40.717557 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 12:13:40.729608 coreos-metadata[1494]: Jan 29 12:13:40.729 INFO Fetch successful
Jan 29 12:13:40.744583 unknown[1494]: wrote ssh authorized keys file for user: core
Jan 29 12:13:40.793545 update-ssh-keys[1612]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 12:13:40.794445 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 29 12:13:40.798518 systemd[1]: Finished sshkeys.service.
Jan 29 12:13:40.803234 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 12:13:40.803735 systemd[1]: Startup finished in 1.192s (kernel) + 15.428s (initrd) + 11.135s (userspace) = 27.756s.
Jan 29 12:13:48.792568 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 12:13:48.804318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:13:49.068201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:13:49.068254 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:13:49.107580 kubelet[1624]: E0129 12:13:49.107479 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:13:49.110182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:13:49.110453 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:13:50.073663 systemd[1]: Started sshd@3-172.24.4.137:22-172.24.4.1:50440.service - OpenSSH per-connection server daemon (172.24.4.1:50440).
Jan 29 12:13:51.345104 sshd[1632]: Accepted publickey for core from 172.24.4.1 port 50440 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:13:51.348152 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:13:51.359227 systemd-logind[1441]: New session 6 of user core.
Jan 29 12:13:51.366279 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 12:13:51.934494 sshd[1634]: Connection closed by 172.24.4.1 port 50440
Jan 29 12:13:51.934664 sshd-session[1632]: pam_unix(sshd:session): session closed for user core
Jan 29 12:13:51.947097 systemd[1]: sshd@3-172.24.4.137:22-172.24.4.1:50440.service: Deactivated successfully.
Jan 29 12:13:51.949847 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 12:13:51.951351 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit.
Jan 29 12:13:51.965539 systemd[1]: Started sshd@4-172.24.4.137:22-172.24.4.1:50452.service - OpenSSH per-connection server daemon (172.24.4.1:50452).
Jan 29 12:13:51.968167 systemd-logind[1441]: Removed session 6.
Jan 29 12:13:53.332319 sshd[1639]: Accepted publickey for core from 172.24.4.1 port 50452 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:13:53.335397 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:13:53.346519 systemd-logind[1441]: New session 7 of user core.
Jan 29 12:13:53.359233 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 12:13:53.934349 sshd[1641]: Connection closed by 172.24.4.1 port 50452
Jan 29 12:13:53.934651 sshd-session[1639]: pam_unix(sshd:session): session closed for user core
Jan 29 12:13:53.946285 systemd[1]: sshd@4-172.24.4.137:22-172.24.4.1:50452.service: Deactivated successfully.
Jan 29 12:13:53.949164 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 12:13:53.952280 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit.
Jan 29 12:13:53.959477 systemd[1]: Started sshd@5-172.24.4.137:22-172.24.4.1:38878.service - OpenSSH per-connection server daemon (172.24.4.1:38878).
Jan 29 12:13:53.962641 systemd-logind[1441]: Removed session 7.
Jan 29 12:13:55.378125 sshd[1646]: Accepted publickey for core from 172.24.4.1 port 38878 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:13:55.380796 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:13:55.391054 systemd-logind[1441]: New session 8 of user core.
Jan 29 12:13:55.398222 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 12:13:56.073862 sshd[1648]: Connection closed by 172.24.4.1 port 38878
Jan 29 12:13:56.073709 sshd-session[1646]: pam_unix(sshd:session): session closed for user core
Jan 29 12:13:56.083333 systemd[1]: sshd@5-172.24.4.137:22-172.24.4.1:38878.service: Deactivated successfully.
Jan 29 12:13:56.086218 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 12:13:56.087807 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit.
Jan 29 12:13:56.096639 systemd[1]: Started sshd@6-172.24.4.137:22-172.24.4.1:38886.service - OpenSSH per-connection server daemon (172.24.4.1:38886).
Jan 29 12:13:56.100567 systemd-logind[1441]: Removed session 8.
Jan 29 12:13:57.250317 sshd[1653]: Accepted publickey for core from 172.24.4.1 port 38886 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:13:57.254080 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:13:57.266767 systemd-logind[1441]: New session 9 of user core.
Jan 29 12:13:57.273459 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 12:13:57.721125 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 12:13:57.721836 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:13:57.744202 sudo[1656]: pam_unix(sudo:session): session closed for user root
Jan 29 12:13:57.894612 sshd[1655]: Connection closed by 172.24.4.1 port 38886
Jan 29 12:13:57.895194 sshd-session[1653]: pam_unix(sshd:session): session closed for user core
Jan 29 12:13:57.904391 systemd[1]: sshd@6-172.24.4.137:22-172.24.4.1:38886.service: Deactivated successfully.
Jan 29 12:13:57.906741 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 12:13:57.908627 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit.
Jan 29 12:13:57.914509 systemd[1]: Started sshd@7-172.24.4.137:22-172.24.4.1:38888.service - OpenSSH per-connection server daemon (172.24.4.1:38888).
Jan 29 12:13:57.917500 systemd-logind[1441]: Removed session 9.
Jan 29 12:13:59.293333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 12:13:59.302345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:13:59.340007 sshd[1661]: Accepted publickey for core from 172.24.4.1 port 38888 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:13:59.340733 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:13:59.353484 systemd-logind[1441]: New session 10 of user core.
Jan 29 12:13:59.357276 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 12:13:59.625973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:13:59.629814 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:13:59.809121 kubelet[1672]: E0129 12:13:59.808995 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:13:59.812452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:13:59.812742 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:13:59.943722 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 12:13:59.945156 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:13:59.952144 sudo[1680]: pam_unix(sudo:session): session closed for user root
Jan 29 12:13:59.963180 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 12:13:59.963803 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:13:59.988609 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 12:14:00.060011 augenrules[1702]: No rules
Jan 29 12:14:00.061115 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 12:14:00.061456 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 12:14:00.064449 sudo[1679]: pam_unix(sudo:session): session closed for user root
Jan 29 12:14:00.207788 sshd[1666]: Connection closed by 172.24.4.1 port 38888
Jan 29 12:14:00.208370 sshd-session[1661]: pam_unix(sshd:session): session closed for user core
Jan 29 12:14:00.218844 systemd[1]: sshd@7-172.24.4.137:22-172.24.4.1:38888.service: Deactivated successfully.
Jan 29 12:14:00.222365 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 12:14:00.224135 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit.
Jan 29 12:14:00.229579 systemd[1]: Started sshd@8-172.24.4.137:22-172.24.4.1:38890.service - OpenSSH per-connection server daemon (172.24.4.1:38890).
Jan 29 12:14:00.232610 systemd-logind[1441]: Removed session 10.
Jan 29 12:14:01.663287 sshd[1710]: Accepted publickey for core from 172.24.4.1 port 38890 ssh2: RSA SHA256:3zxyn8GTxln78fZPvADYDU0Y6VpYL5FrRdlm8jwk4vY
Jan 29 12:14:01.665851 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:14:01.677304 systemd-logind[1441]: New session 11 of user core.
Jan 29 12:14:01.692262 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 12:14:02.178250 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 12:14:02.179590 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:14:03.462183 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:14:03.483769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:14:03.563015 systemd[1]: Reloading requested from client PID 1746 ('systemctl') (unit session-11.scope)...
Jan 29 12:14:03.563145 systemd[1]: Reloading...
Jan 29 12:14:03.660966 zram_generator::config[1787]: No configuration found.
Jan 29 12:14:03.972222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:14:04.053526 systemd[1]: Reloading finished in 490 ms.
Jan 29 12:14:04.100824 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 12:14:04.100901 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 29 12:14:04.101264 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:14:04.105219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:14:04.219765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:14:04.227489 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 12:14:04.307988 kubelet[1849]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:14:04.307988 kubelet[1849]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 29 12:14:04.307988 kubelet[1849]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:14:04.308655 kubelet[1849]: I0129 12:14:04.308030 1849 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 12:14:04.871757 kubelet[1849]: I0129 12:14:04.871720 1849 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Jan 29 12:14:04.871941 kubelet[1849]: I0129 12:14:04.871914 1849 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 12:14:04.872509 kubelet[1849]: I0129 12:14:04.872493 1849 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 29 12:14:04.907632 kubelet[1849]: I0129 12:14:04.907602 1849 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 12:14:04.919765 kubelet[1849]: E0129 12:14:04.919734 1849 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 12:14:04.919950 kubelet[1849]: I0129 12:14:04.919905 1849 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 12:14:04.922827 kubelet[1849]: I0129 12:14:04.922646 1849 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 12:14:04.923074 kubelet[1849]: I0129 12:14:04.922883 1849 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 12:14:04.923146 kubelet[1849]: I0129 12:14:04.922907 1849 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.137","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 12:14:04.923146 kubelet[1849]: I0129 12:14:04.923093 1849 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 12:14:04.923146 kubelet[1849]: I0129 12:14:04.923103 1849 container_manager_linux.go:304] "Creating device plugin manager"
Jan 29 12:14:04.923470 kubelet[1849]: I0129 12:14:04.923199 1849 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:14:04.929380 kubelet[1849]: I0129 12:14:04.929345 1849 kubelet.go:446] "Attempting to sync node with API server"
Jan 29 12:14:04.929380 kubelet[1849]: I0129 12:14:04.929366 1849 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 12:14:04.929380 kubelet[1849]: I0129 12:14:04.929382 1849 kubelet.go:352] "Adding apiserver pod source"
Jan 29 12:14:04.929380 kubelet[1849]: I0129 12:14:04.929393 1849 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 12:14:04.936204 kubelet[1849]: E0129 12:14:04.936153 1849 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:04.937001 kubelet[1849]: E0129 12:14:04.936425 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:04.937157 kubelet[1849]: I0129 12:14:04.937079 1849 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 12:14:04.937553 kubelet[1849]: I0129 12:14:04.937502 1849 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 12:14:04.937632 kubelet[1849]: W0129 12:14:04.937557 1849 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 12:14:04.940479 kubelet[1849]: I0129 12:14:04.940433 1849 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 29 12:14:04.940479 kubelet[1849]: I0129 12:14:04.940467 1849 server.go:1287] "Started kubelet"
Jan 29 12:14:04.941991 kubelet[1849]: I0129 12:14:04.941943 1849 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 12:14:04.942874 kubelet[1849]: I0129 12:14:04.942833 1849 server.go:490] "Adding debug handlers to kubelet server"
Jan 29 12:14:04.943751 kubelet[1849]: I0129 12:14:04.943692 1849 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 12:14:04.943907 kubelet[1849]: I0129 12:14:04.943874 1849 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 12:14:04.944271 kubelet[1849]: I0129 12:14:04.944240 1849 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 12:14:04.952997 kubelet[1849]: I0129 12:14:04.951834 1849 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 29 12:14:04.952997 kubelet[1849]: I0129 12:14:04.952465 1849 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 12:14:04.953541 kubelet[1849]: I0129 12:14:04.953500 1849 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 12:14:04.953642 kubelet[1849]: I0129 12:14:04.953608 1849 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 12:14:04.957009 kubelet[1849]: E0129 12:14:04.956974 1849 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.137\" not found"
Jan 29 12:14:04.962463 kubelet[1849]: W0129 12:14:04.962249 1849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 29 12:14:04.962463 kubelet[1849]: E0129 12:14:04.962290 1849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 29 12:14:04.964549 kubelet[1849]: E0129 12:14:04.962327 1849 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.137.181f28cda8029873 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.137,UID:172.24.4.137,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.24.4.137,},FirstTimestamp:2025-01-29 12:14:04.940449907 +0000 UTC m=+0.709491893,LastTimestamp:2025-01-29 12:14:04.940449907 +0000 UTC m=+0.709491893,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.137,}"
Jan 29 12:14:04.965322 kubelet[1849]: W0129 12:14:04.965272 1849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.137" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 29 12:14:04.965322 kubelet[1849]: E0129 12:14:04.965302 1849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.24.4.137\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 29 12:14:04.965761 kubelet[1849]: E0129 12:14:04.965518 1849 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.137\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 29 12:14:04.965761 kubelet[1849]: W0129 12:14:04.965593 1849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 29 12:14:04.965761 kubelet[1849]: E0129 12:14:04.965609 1849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Jan 29 12:14:04.970141 kubelet[1849]: I0129 12:14:04.967269 1849 factory.go:221] Registration of the systemd container factory successfully
Jan 29 12:14:04.970141 kubelet[1849]: I0129 12:14:04.967405 1849 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 12:14:04.970898 kubelet[1849]: I0129 12:14:04.970485 1849 factory.go:221] Registration of the containerd container factory successfully
Jan 29 12:14:04.992404 kubelet[1849]: I0129 12:14:04.992363 1849 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 29 12:14:04.993075 kubelet[1849]: I0129 12:14:04.992625 1849 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 29 12:14:04.993075 kubelet[1849]: I0129 12:14:04.992667 1849 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:14:05.000971 kubelet[1849]: I0129 12:14:05.000531 1849 policy_none.go:49] "None policy: Start"
Jan 29 12:14:05.000971 kubelet[1849]: I0129 12:14:05.000570 1849 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 29 12:14:05.000971 kubelet[1849]: I0129 12:14:05.000593 1849 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 12:14:05.009328 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 12:14:05.020761 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 12:14:05.024994 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 12:14:05.041514 kubelet[1849]: I0129 12:14:05.040857 1849 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 12:14:05.041514 kubelet[1849]: I0129 12:14:05.041042 1849 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 12:14:05.041514 kubelet[1849]: I0129 12:14:05.041053 1849 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 12:14:05.041514 kubelet[1849]: I0129 12:14:05.041392 1849 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 12:14:05.043749 kubelet[1849]: E0129 12:14:05.043715 1849 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 29 12:14:05.043841 kubelet[1849]: E0129 12:14:05.043756 1849 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.137\" not found"
Jan 29 12:14:05.065042 kubelet[1849]: I0129 12:14:05.064981 1849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 12:14:05.066657 kubelet[1849]: I0129 12:14:05.066333 1849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 12:14:05.066657 kubelet[1849]: I0129 12:14:05.066352 1849 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 29 12:14:05.066657 kubelet[1849]: I0129 12:14:05.066388 1849 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 29 12:14:05.066657 kubelet[1849]: I0129 12:14:05.066395 1849 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 29 12:14:05.066657 kubelet[1849]: E0129 12:14:05.066480 1849 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 29 12:14:05.145058 kubelet[1849]: I0129 12:14:05.142726 1849 kubelet_node_status.go:76] "Attempting to register node" node="172.24.4.137"
Jan 29 12:14:05.152344 kubelet[1849]: I0129 12:14:05.152296 1849 kubelet_node_status.go:79] "Successfully registered node" node="172.24.4.137"
Jan 29 12:14:05.152650 kubelet[1849]: E0129 12:14:05.152357 1849 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.24.4.137\": node \"172.24.4.137\" not found"
Jan 29 12:14:05.167474 kubelet[1849]: E0129 12:14:05.167399 1849 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.137\" not found"
Jan 29 12:14:05.268044 kubelet[1849]: E0129 12:14:05.267965 1849 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.137\" not found"
Jan 29 12:14:05.272126 sudo[1713]: pam_unix(sudo:session): session closed for user root
Jan 29 12:14:05.368920 kubelet[1849]: E0129 12:14:05.368828 1849 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.137\" not found"
Jan 29 12:14:05.452249 sshd[1712]: Connection closed by 172.24.4.1 port 38890
Jan 29 12:14:05.453111 sshd-session[1710]: pam_unix(sshd:session): session closed for user core
Jan 29 12:14:05.461890 systemd[1]: sshd@8-172.24.4.137:22-172.24.4.1:38890.service: Deactivated successfully.
Jan 29 12:14:05.465502 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 12:14:05.466102 systemd[1]: session-11.scope: Consumed 1.060s CPU time, 75.3M memory peak, 0B memory swap peak.
Jan 29 12:14:05.467671 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit.
Jan 29 12:14:05.469826 kubelet[1849]: E0129 12:14:05.469675 1849 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.137\" not found"
Jan 29 12:14:05.470811 systemd-logind[1441]: Removed session 11.
Jan 29 12:14:05.570107 kubelet[1849]: E0129 12:14:05.570043 1849 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.137\" not found"
Jan 29 12:14:05.670486 kubelet[1849]: E0129 12:14:05.670428 1849 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.137\" not found"
Jan 29 12:14:05.771021 kubelet[1849]: E0129 12:14:05.770839 1849 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.137\" not found"
Jan 29 12:14:05.871393 kubelet[1849]: E0129 12:14:05.871353 1849 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.137\" not found"
Jan 29 12:14:05.876778 kubelet[1849]: I0129 12:14:05.876676 1849 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 12:14:05.877074 kubelet[1849]: W0129 12:14:05.876891 1849 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 12:14:05.933835 kubelet[1849]: I0129 12:14:05.933768 1849 apiserver.go:52] "Watching apiserver"
Jan 29 12:14:05.936986 kubelet[1849]: E0129 12:14:05.936592 1849 file_linux.go:61] "Unable to read config path" err="path does not exist,
ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:05.937541 kubelet[1849]: E0129 12:14:05.937486 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc" Jan 29 12:14:05.955604 kubelet[1849]: I0129 12:14:05.955488 1849 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:14:05.959437 systemd[1]: Created slice kubepods-besteffort-pod2f1d6059_bcc2_441b_8a97_e92d3a9f4d27.slice - libcontainer container kubepods-besteffort-pod2f1d6059_bcc2_441b_8a97_e92d3a9f4d27.slice. Jan 29 12:14:05.962565 kubelet[1849]: I0129 12:14:05.961145 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f1d6059-bcc2-441b-8a97-e92d3a9f4d27-kube-proxy\") pod \"kube-proxy-xc4w8\" (UID: \"2f1d6059-bcc2-441b-8a97-e92d3a9f4d27\") " pod="kube-system/kube-proxy-xc4w8" Jan 29 12:14:05.962565 kubelet[1849]: I0129 12:14:05.961246 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1e2e62a4-cbb1-40f7-bd85-d80d84132d01-policysync\") pod \"calico-node-bhbqj\" (UID: \"1e2e62a4-cbb1-40f7-bd85-d80d84132d01\") " pod="calico-system/calico-node-bhbqj" Jan 29 12:14:05.962565 kubelet[1849]: I0129 12:14:05.961297 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1e2e62a4-cbb1-40f7-bd85-d80d84132d01-var-run-calico\") pod \"calico-node-bhbqj\" (UID: \"1e2e62a4-cbb1-40f7-bd85-d80d84132d01\") " pod="calico-system/calico-node-bhbqj" Jan 29 12:14:05.962565 kubelet[1849]: I0129 12:14:05.961340 1849 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1e2e62a4-cbb1-40f7-bd85-d80d84132d01-cni-net-dir\") pod \"calico-node-bhbqj\" (UID: \"1e2e62a4-cbb1-40f7-bd85-d80d84132d01\") " pod="calico-system/calico-node-bhbqj" Jan 29 12:14:05.962565 kubelet[1849]: I0129 12:14:05.961383 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87fda5ff-7ae8-4a23-9f68-7722f3966cdc-kubelet-dir\") pod \"csi-node-driver-w9pfc\" (UID: \"87fda5ff-7ae8-4a23-9f68-7722f3966cdc\") " pod="calico-system/csi-node-driver-w9pfc" Jan 29 12:14:05.963116 kubelet[1849]: I0129 12:14:05.961417 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f1d6059-bcc2-441b-8a97-e92d3a9f4d27-xtables-lock\") pod \"kube-proxy-xc4w8\" (UID: \"2f1d6059-bcc2-441b-8a97-e92d3a9f4d27\") " pod="kube-system/kube-proxy-xc4w8" Jan 29 12:14:05.963116 kubelet[1849]: I0129 12:14:05.961470 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e2e62a4-cbb1-40f7-bd85-d80d84132d01-tigera-ca-bundle\") pod \"calico-node-bhbqj\" (UID: \"1e2e62a4-cbb1-40f7-bd85-d80d84132d01\") " pod="calico-system/calico-node-bhbqj" Jan 29 12:14:05.963116 kubelet[1849]: I0129 12:14:05.961511 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e2e62a4-cbb1-40f7-bd85-d80d84132d01-var-lib-calico\") pod \"calico-node-bhbqj\" (UID: \"1e2e62a4-cbb1-40f7-bd85-d80d84132d01\") " pod="calico-system/calico-node-bhbqj" Jan 29 12:14:05.963116 kubelet[1849]: I0129 12:14:05.961554 1849 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/87fda5ff-7ae8-4a23-9f68-7722f3966cdc-socket-dir\") pod \"csi-node-driver-w9pfc\" (UID: \"87fda5ff-7ae8-4a23-9f68-7722f3966cdc\") " pod="calico-system/csi-node-driver-w9pfc" Jan 29 12:14:05.963116 kubelet[1849]: I0129 12:14:05.961595 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1e2e62a4-cbb1-40f7-bd85-d80d84132d01-node-certs\") pod \"calico-node-bhbqj\" (UID: \"1e2e62a4-cbb1-40f7-bd85-d80d84132d01\") " pod="calico-system/calico-node-bhbqj" Jan 29 12:14:05.963642 kubelet[1849]: I0129 12:14:05.961626 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1e2e62a4-cbb1-40f7-bd85-d80d84132d01-cni-log-dir\") pod \"calico-node-bhbqj\" (UID: \"1e2e62a4-cbb1-40f7-bd85-d80d84132d01\") " pod="calico-system/calico-node-bhbqj" Jan 29 12:14:05.963642 kubelet[1849]: I0129 12:14:05.961667 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f1d6059-bcc2-441b-8a97-e92d3a9f4d27-lib-modules\") pod \"kube-proxy-xc4w8\" (UID: \"2f1d6059-bcc2-441b-8a97-e92d3a9f4d27\") " pod="kube-system/kube-proxy-xc4w8" Jan 29 12:14:05.963642 kubelet[1849]: I0129 12:14:05.961707 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/87fda5ff-7ae8-4a23-9f68-7722f3966cdc-registration-dir\") pod \"csi-node-driver-w9pfc\" (UID: \"87fda5ff-7ae8-4a23-9f68-7722f3966cdc\") " pod="calico-system/csi-node-driver-w9pfc" Jan 29 12:14:05.963642 kubelet[1849]: I0129 12:14:05.961750 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nltqj\" (UniqueName: \"kubernetes.io/projected/87fda5ff-7ae8-4a23-9f68-7722f3966cdc-kube-api-access-nltqj\") pod \"csi-node-driver-w9pfc\" (UID: \"87fda5ff-7ae8-4a23-9f68-7722f3966cdc\") " pod="calico-system/csi-node-driver-w9pfc" Jan 29 12:14:05.963642 kubelet[1849]: I0129 12:14:05.961784 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq8jz\" (UniqueName: \"kubernetes.io/projected/2f1d6059-bcc2-441b-8a97-e92d3a9f4d27-kube-api-access-nq8jz\") pod \"kube-proxy-xc4w8\" (UID: \"2f1d6059-bcc2-441b-8a97-e92d3a9f4d27\") " pod="kube-system/kube-proxy-xc4w8" Jan 29 12:14:05.963988 kubelet[1849]: I0129 12:14:05.961828 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e2e62a4-cbb1-40f7-bd85-d80d84132d01-lib-modules\") pod \"calico-node-bhbqj\" (UID: \"1e2e62a4-cbb1-40f7-bd85-d80d84132d01\") " pod="calico-system/calico-node-bhbqj" Jan 29 12:14:05.963988 kubelet[1849]: I0129 12:14:05.961867 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e2e62a4-cbb1-40f7-bd85-d80d84132d01-xtables-lock\") pod \"calico-node-bhbqj\" (UID: \"1e2e62a4-cbb1-40f7-bd85-d80d84132d01\") " pod="calico-system/calico-node-bhbqj" Jan 29 12:14:05.963988 kubelet[1849]: I0129 12:14:05.961909 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1e2e62a4-cbb1-40f7-bd85-d80d84132d01-cni-bin-dir\") pod \"calico-node-bhbqj\" (UID: \"1e2e62a4-cbb1-40f7-bd85-d80d84132d01\") " pod="calico-system/calico-node-bhbqj" Jan 29 12:14:05.963988 kubelet[1849]: I0129 12:14:05.961989 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/1e2e62a4-cbb1-40f7-bd85-d80d84132d01-flexvol-driver-host\") pod \"calico-node-bhbqj\" (UID: \"1e2e62a4-cbb1-40f7-bd85-d80d84132d01\") " pod="calico-system/calico-node-bhbqj" Jan 29 12:14:05.963988 kubelet[1849]: I0129 12:14:05.962025 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/87fda5ff-7ae8-4a23-9f68-7722f3966cdc-varrun\") pod \"csi-node-driver-w9pfc\" (UID: \"87fda5ff-7ae8-4a23-9f68-7722f3966cdc\") " pod="calico-system/csi-node-driver-w9pfc" Jan 29 12:14:05.964318 kubelet[1849]: I0129 12:14:05.962067 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlpbj\" (UniqueName: \"kubernetes.io/projected/1e2e62a4-cbb1-40f7-bd85-d80d84132d01-kube-api-access-rlpbj\") pod \"calico-node-bhbqj\" (UID: \"1e2e62a4-cbb1-40f7-bd85-d80d84132d01\") " pod="calico-system/calico-node-bhbqj" Jan 29 12:14:05.978001 kubelet[1849]: I0129 12:14:05.976096 1849 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 12:14:05.978997 containerd[1462]: time="2025-01-29T12:14:05.978713019Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:14:05.982725 kubelet[1849]: I0129 12:14:05.981257 1849 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 12:14:05.984483 systemd[1]: Created slice kubepods-besteffort-pod1e2e62a4_cbb1_40f7_bd85_d80d84132d01.slice - libcontainer container kubepods-besteffort-pod1e2e62a4_cbb1_40f7_bd85_d80d84132d01.slice. 
Jan 29 12:14:06.067277 kubelet[1849]: E0129 12:14:06.067130 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.067277 kubelet[1849]: W0129 12:14:06.067209 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.067484 kubelet[1849]: E0129 12:14:06.067267 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.068722 kubelet[1849]: E0129 12:14:06.067842 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.068722 kubelet[1849]: W0129 12:14:06.067874 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.068722 kubelet[1849]: E0129 12:14:06.068189 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.068722 kubelet[1849]: E0129 12:14:06.068656 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.069315 kubelet[1849]: W0129 12:14:06.068730 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.069315 kubelet[1849]: E0129 12:14:06.068754 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:14:06.069906 kubelet[1849]: E0129 12:14:06.069632 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.069906 kubelet[1849]: W0129 12:14:06.069674 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.069906 kubelet[1849]: E0129 12:14:06.069723 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.070319 kubelet[1849]: E0129 12:14:06.070290 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.070469 kubelet[1849]: W0129 12:14:06.070441 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.070669 kubelet[1849]: E0129 12:14:06.070613 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:14:06.071440 kubelet[1849]: E0129 12:14:06.071174 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.071440 kubelet[1849]: W0129 12:14:06.071203 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.071440 kubelet[1849]: E0129 12:14:06.071263 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.072632 kubelet[1849]: E0129 12:14:06.072353 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.072632 kubelet[1849]: W0129 12:14:06.072443 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.072798 kubelet[1849]: E0129 12:14:06.072754 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:14:06.073693 kubelet[1849]: E0129 12:14:06.073629 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.073693 kubelet[1849]: W0129 12:14:06.073685 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.074257 kubelet[1849]: E0129 12:14:06.074078 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.074709 kubelet[1849]: E0129 12:14:06.074679 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.075151 kubelet[1849]: W0129 12:14:06.074848 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.075524 kubelet[1849]: E0129 12:14:06.075496 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.075796 kubelet[1849]: W0129 12:14:06.075647 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.076368 kubelet[1849]: E0129 12:14:06.076340 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.076669 kubelet[1849]: W0129 12:14:06.076515 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.077051 kubelet[1849]: E0129 12:14:06.077024 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.077217 kubelet[1849]: W0129 12:14:06.077191 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.077407 kubelet[1849]: E0129 12:14:06.077323 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.078023 kubelet[1849]: E0129 12:14:06.077785 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.078023 kubelet[1849]: W0129 12:14:06.077812 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.078023 kubelet[1849]: E0129 12:14:06.077835 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:14:06.084628 kubelet[1849]: E0129 12:14:06.084133 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.084628 kubelet[1849]: W0129 12:14:06.084168 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.084628 kubelet[1849]: E0129 12:14:06.084200 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.084628 kubelet[1849]: E0129 12:14:06.084367 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.084628 kubelet[1849]: E0129 12:14:06.084408 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.087005 kubelet[1849]: E0129 12:14:06.086095 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.087190 kubelet[1849]: W0129 12:14:06.087156 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.090566 kubelet[1849]: E0129 12:14:06.088079 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:14:06.091799 kubelet[1849]: E0129 12:14:06.091612 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.095110 kubelet[1849]: E0129 12:14:06.090623 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.095110 kubelet[1849]: W0129 12:14:06.093107 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.095110 kubelet[1849]: E0129 12:14:06.093441 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.095110 kubelet[1849]: W0129 12:14:06.093463 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.095110 kubelet[1849]: E0129 12:14:06.093760 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.095110 kubelet[1849]: W0129 12:14:06.093780 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.095510 kubelet[1849]: E0129 12:14:06.095163 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.095510 kubelet[1849]: W0129 12:14:06.095186 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.095510 kubelet[1849]: E0129 12:14:06.095213 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.097741 kubelet[1849]: E0129 12:14:06.097647 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.097741 kubelet[1849]: W0129 12:14:06.097673 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.097741 kubelet[1849]: E0129 12:14:06.097700 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.097967 kubelet[1849]: E0129 12:14:06.097749 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.097967 kubelet[1849]: E0129 12:14:06.097819 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.097967 kubelet[1849]: E0129 12:14:06.097845 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:14:06.103129 kubelet[1849]: E0129 12:14:06.099433 1849 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:14:06.103129 kubelet[1849]: W0129 12:14:06.099466 1849 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:14:06.103129 kubelet[1849]: E0129 12:14:06.099490 1849 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:14:06.277169 containerd[1462]: time="2025-01-29T12:14:06.277102233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xc4w8,Uid:2f1d6059-bcc2-441b-8a97-e92d3a9f4d27,Namespace:kube-system,Attempt:0,}" Jan 29 12:14:06.290689 containerd[1462]: time="2025-01-29T12:14:06.290594741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bhbqj,Uid:1e2e62a4-cbb1-40f7-bd85-d80d84132d01,Namespace:calico-system,Attempt:0,}" Jan 29 12:14:06.937084 kubelet[1849]: E0129 12:14:06.936968 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:06.967801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1777467535.mount: Deactivated successfully. 
Jan 29 12:14:06.978648 containerd[1462]: time="2025-01-29T12:14:06.978329030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:14:06.982135 containerd[1462]: time="2025-01-29T12:14:06.982041853Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 29 12:14:06.987191 containerd[1462]: time="2025-01-29T12:14:06.987102982Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:14:06.989254 containerd[1462]: time="2025-01-29T12:14:06.989163040Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:14:06.990413 containerd[1462]: time="2025-01-29T12:14:06.990336533Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 12:14:06.998596 containerd[1462]: time="2025-01-29T12:14:06.998482409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:14:07.001375 containerd[1462]: time="2025-01-29T12:14:07.000972506Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 710.141947ms"
Jan 29 12:14:07.005318 containerd[1462]: time="2025-01-29T12:14:07.004791546Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 727.290603ms"
Jan 29 12:14:07.268060 containerd[1462]: time="2025-01-29T12:14:07.267742880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:14:07.268060 containerd[1462]: time="2025-01-29T12:14:07.267799708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:14:07.268060 containerd[1462]: time="2025-01-29T12:14:07.267817441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:14:07.268060 containerd[1462]: time="2025-01-29T12:14:07.267912933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:14:07.274950 containerd[1462]: time="2025-01-29T12:14:07.274799072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:14:07.274950 containerd[1462]: time="2025-01-29T12:14:07.274859306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:14:07.274950 containerd[1462]: time="2025-01-29T12:14:07.274879595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:14:07.275447 containerd[1462]: time="2025-01-29T12:14:07.275219631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:14:07.422208 systemd[1]: Started cri-containerd-b6e9bf76728cb6222eac451cc272afadd9624dc0f8cc3fcb870e0f187cd56d1a.scope - libcontainer container b6e9bf76728cb6222eac451cc272afadd9624dc0f8cc3fcb870e0f187cd56d1a.
Jan 29 12:14:07.437481 systemd[1]: Started cri-containerd-926e6a30d924e7c83a2fd020f75ac7dfebe44fb900d0a11b68dd51ae4489ac08.scope - libcontainer container 926e6a30d924e7c83a2fd020f75ac7dfebe44fb900d0a11b68dd51ae4489ac08.
Jan 29 12:14:07.471729 containerd[1462]: time="2025-01-29T12:14:07.471673893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bhbqj,Uid:1e2e62a4-cbb1-40f7-bd85-d80d84132d01,Namespace:calico-system,Attempt:0,} returns sandbox id \"926e6a30d924e7c83a2fd020f75ac7dfebe44fb900d0a11b68dd51ae4489ac08\""
Jan 29 12:14:07.473807 containerd[1462]: time="2025-01-29T12:14:07.473748987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xc4w8,Uid:2f1d6059-bcc2-441b-8a97-e92d3a9f4d27,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6e9bf76728cb6222eac451cc272afadd9624dc0f8cc3fcb870e0f187cd56d1a\""
Jan 29 12:14:07.475550 containerd[1462]: time="2025-01-29T12:14:07.475337077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 12:14:07.937592 kubelet[1849]: E0129 12:14:07.937419 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:08.068144 kubelet[1849]: E0129 12:14:08.067923 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc"
Jan 29 12:14:08.938739 kubelet[1849]: E0129 12:14:08.938621 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:09.013997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3162087979.mount: Deactivated successfully.
Jan 29 12:14:09.134262 containerd[1462]: time="2025-01-29T12:14:09.134203988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:14:09.135503 containerd[1462]: time="2025-01-29T12:14:09.135366054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 29 12:14:09.136684 containerd[1462]: time="2025-01-29T12:14:09.136652705Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:14:09.139679 containerd[1462]: time="2025-01-29T12:14:09.139616920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:14:09.140785 containerd[1462]: time="2025-01-29T12:14:09.140577914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.665193157s"
Jan 29 12:14:09.140785 containerd[1462]: time="2025-01-29T12:14:09.140609694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 29 12:14:09.142213 containerd[1462]: time="2025-01-29T12:14:09.142172820Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\""
Jan 29 12:14:09.143246 containerd[1462]: time="2025-01-29T12:14:09.143199679Z" level=info msg="CreateContainer within sandbox \"926e6a30d924e7c83a2fd020f75ac7dfebe44fb900d0a11b68dd51ae4489ac08\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 29 12:14:09.164879 containerd[1462]: time="2025-01-29T12:14:09.164783237Z" level=info msg="CreateContainer within sandbox \"926e6a30d924e7c83a2fd020f75ac7dfebe44fb900d0a11b68dd51ae4489ac08\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6be2b10c33f065abec29b9a8d520ca1fb32914371278888ec2b30f5d1ca159e5\""
Jan 29 12:14:09.166017 containerd[1462]: time="2025-01-29T12:14:09.165613332Z" level=info msg="StartContainer for \"6be2b10c33f065abec29b9a8d520ca1fb32914371278888ec2b30f5d1ca159e5\""
Jan 29 12:14:09.205091 systemd[1]: Started cri-containerd-6be2b10c33f065abec29b9a8d520ca1fb32914371278888ec2b30f5d1ca159e5.scope - libcontainer container 6be2b10c33f065abec29b9a8d520ca1fb32914371278888ec2b30f5d1ca159e5.
Jan 29 12:14:09.234665 containerd[1462]: time="2025-01-29T12:14:09.234581049Z" level=info msg="StartContainer for \"6be2b10c33f065abec29b9a8d520ca1fb32914371278888ec2b30f5d1ca159e5\" returns successfully"
Jan 29 12:14:09.242012 systemd[1]: cri-containerd-6be2b10c33f065abec29b9a8d520ca1fb32914371278888ec2b30f5d1ca159e5.scope: Deactivated successfully.
Jan 29 12:14:09.477922 containerd[1462]: time="2025-01-29T12:14:09.477370941Z" level=info msg="shim disconnected" id=6be2b10c33f065abec29b9a8d520ca1fb32914371278888ec2b30f5d1ca159e5 namespace=k8s.io
Jan 29 12:14:09.477922 containerd[1462]: time="2025-01-29T12:14:09.477471772Z" level=warning msg="cleaning up after shim disconnected" id=6be2b10c33f065abec29b9a8d520ca1fb32914371278888ec2b30f5d1ca159e5 namespace=k8s.io
Jan 29 12:14:09.477922 containerd[1462]: time="2025-01-29T12:14:09.477494145Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:14:09.942293 systemd[1]: run-containerd-runc-k8s.io-6be2b10c33f065abec29b9a8d520ca1fb32914371278888ec2b30f5d1ca159e5-runc.L7N04d.mount: Deactivated successfully.
Jan 29 12:14:09.942513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6be2b10c33f065abec29b9a8d520ca1fb32914371278888ec2b30f5d1ca159e5-rootfs.mount: Deactivated successfully.
Jan 29 12:14:09.943109 kubelet[1849]: E0129 12:14:09.943073 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:10.067427 kubelet[1849]: E0129 12:14:10.067376 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc"
Jan 29 12:14:10.642611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1233006961.mount: Deactivated successfully.
Jan 29 12:14:10.943443 kubelet[1849]: E0129 12:14:10.943334 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:11.200113 containerd[1462]: time="2025-01-29T12:14:11.199869594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:14:11.200975 containerd[1462]: time="2025-01-29T12:14:11.200837557Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909474"
Jan 29 12:14:11.202049 containerd[1462]: time="2025-01-29T12:14:11.201980203Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:14:11.205889 containerd[1462]: time="2025-01-29T12:14:11.205084425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:14:11.205889 containerd[1462]: time="2025-01-29T12:14:11.205781607Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.063581044s"
Jan 29 12:14:11.205889 containerd[1462]: time="2025-01-29T12:14:11.205809930Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\""
Jan 29 12:14:11.207276 containerd[1462]: time="2025-01-29T12:14:11.207257323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 29 12:14:11.208550 containerd[1462]: time="2025-01-29T12:14:11.208527910Z" level=info msg="CreateContainer within sandbox \"b6e9bf76728cb6222eac451cc272afadd9624dc0f8cc3fcb870e0f187cd56d1a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 12:14:11.229808 containerd[1462]: time="2025-01-29T12:14:11.229777448Z" level=info msg="CreateContainer within sandbox \"b6e9bf76728cb6222eac451cc272afadd9624dc0f8cc3fcb870e0f187cd56d1a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"29cd1c8c4e1262f7b653df884719201dd9014b0873af2d7239b39dac8657aa61\""
Jan 29 12:14:11.230522 containerd[1462]: time="2025-01-29T12:14:11.230501901Z" level=info msg="StartContainer for \"29cd1c8c4e1262f7b653df884719201dd9014b0873af2d7239b39dac8657aa61\""
Jan 29 12:14:11.259671 systemd[1]: run-containerd-runc-k8s.io-29cd1c8c4e1262f7b653df884719201dd9014b0873af2d7239b39dac8657aa61-runc.fffsSL.mount: Deactivated successfully.
Jan 29 12:14:11.270137 systemd[1]: Started cri-containerd-29cd1c8c4e1262f7b653df884719201dd9014b0873af2d7239b39dac8657aa61.scope - libcontainer container 29cd1c8c4e1262f7b653df884719201dd9014b0873af2d7239b39dac8657aa61.
Jan 29 12:14:11.304081 containerd[1462]: time="2025-01-29T12:14:11.303947168Z" level=info msg="StartContainer for \"29cd1c8c4e1262f7b653df884719201dd9014b0873af2d7239b39dac8657aa61\" returns successfully"
Jan 29 12:14:11.944080 kubelet[1849]: E0129 12:14:11.944013 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:12.067698 kubelet[1849]: E0129 12:14:12.067490 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc"
Jan 29 12:14:12.240040 kubelet[1849]: I0129 12:14:12.239717 1849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xc4w8" podStartSLOduration=3.507948077 podStartE2EDuration="7.239684386s" podCreationTimestamp="2025-01-29 12:14:05 +0000 UTC" firstStartedPulling="2025-01-29 12:14:07.475144681 +0000 UTC m=+3.244186677" lastFinishedPulling="2025-01-29 12:14:11.206881 +0000 UTC m=+6.975922986" observedRunningTime="2025-01-29 12:14:12.239351375 +0000 UTC m=+8.008393412" watchObservedRunningTime="2025-01-29 12:14:12.239684386 +0000 UTC m=+8.008726422"
Jan 29 12:14:12.944916 kubelet[1849]: E0129 12:14:12.944866 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:13.947079 kubelet[1849]: E0129 12:14:13.946988 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:14.067233 kubelet[1849]: E0129 12:14:14.067143 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc"
Jan 29 12:14:14.948121 kubelet[1849]: E0129 12:14:14.948045 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:15.948647 kubelet[1849]: E0129 12:14:15.948590 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:16.068094 kubelet[1849]: E0129 12:14:16.067664 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc"
Jan 29 12:14:16.529317 containerd[1462]: time="2025-01-29T12:14:16.529084596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:14:16.530883 containerd[1462]: time="2025-01-29T12:14:16.530798245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 29 12:14:16.531532 containerd[1462]: time="2025-01-29T12:14:16.531426151Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:14:16.534676 containerd[1462]: time="2025-01-29T12:14:16.534582906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:14:16.535536 containerd[1462]: time="2025-01-29T12:14:16.535481044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.327953148s"
Jan 29 12:14:16.535536 containerd[1462]: time="2025-01-29T12:14:16.535522802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 29 12:14:16.539260 containerd[1462]: time="2025-01-29T12:14:16.539223395Z" level=info msg="CreateContainer within sandbox \"926e6a30d924e7c83a2fd020f75ac7dfebe44fb900d0a11b68dd51ae4489ac08\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 12:14:16.562775 containerd[1462]: time="2025-01-29T12:14:16.562676522Z" level=info msg="CreateContainer within sandbox \"926e6a30d924e7c83a2fd020f75ac7dfebe44fb900d0a11b68dd51ae4489ac08\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c0f23d8633869561542008fc32278b6aa6ea79b1c132bc19de8fba6f68c7daa0\""
Jan 29 12:14:16.563476 containerd[1462]: time="2025-01-29T12:14:16.563230970Z" level=info msg="StartContainer for \"c0f23d8633869561542008fc32278b6aa6ea79b1c132bc19de8fba6f68c7daa0\""
Jan 29 12:14:16.599143 systemd[1]: Started cri-containerd-c0f23d8633869561542008fc32278b6aa6ea79b1c132bc19de8fba6f68c7daa0.scope - libcontainer container c0f23d8633869561542008fc32278b6aa6ea79b1c132bc19de8fba6f68c7daa0.
Jan 29 12:14:16.629840 containerd[1462]: time="2025-01-29T12:14:16.629727622Z" level=info msg="StartContainer for \"c0f23d8633869561542008fc32278b6aa6ea79b1c132bc19de8fba6f68c7daa0\" returns successfully"
Jan 29 12:14:16.949553 kubelet[1849]: E0129 12:14:16.949466 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:17.786826 containerd[1462]: time="2025-01-29T12:14:17.786712701Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 12:14:17.792056 systemd[1]: cri-containerd-c0f23d8633869561542008fc32278b6aa6ea79b1c132bc19de8fba6f68c7daa0.scope: Deactivated successfully.
Jan 29 12:14:17.799156 kubelet[1849]: I0129 12:14:17.799120 1849 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Jan 29 12:14:17.844785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0f23d8633869561542008fc32278b6aa6ea79b1c132bc19de8fba6f68c7daa0-rootfs.mount: Deactivated successfully.
Jan 29 12:14:17.950398 kubelet[1849]: E0129 12:14:17.950319 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:18.047695 update_engine[1442]: I20250129 12:14:18.047502 1442 update_attempter.cc:509] Updating boot flags...
Jan 29 12:14:18.080637 systemd[1]: Created slice kubepods-besteffort-pod87fda5ff_7ae8_4a23_9f68_7722f3966cdc.slice - libcontainer container kubepods-besteffort-pod87fda5ff_7ae8_4a23_9f68_7722f3966cdc.slice.
Jan 29 12:14:18.087040 containerd[1462]: time="2025-01-29T12:14:18.086672290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:0,}"
Jan 29 12:14:18.951104 kubelet[1849]: E0129 12:14:18.951027 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:19.170055 containerd[1462]: time="2025-01-29T12:14:19.169687082Z" level=info msg="shim disconnected" id=c0f23d8633869561542008fc32278b6aa6ea79b1c132bc19de8fba6f68c7daa0 namespace=k8s.io
Jan 29 12:14:19.170055 containerd[1462]: time="2025-01-29T12:14:19.169791339Z" level=warning msg="cleaning up after shim disconnected" id=c0f23d8633869561542008fc32278b6aa6ea79b1c132bc19de8fba6f68c7daa0 namespace=k8s.io
Jan 29 12:14:19.170055 containerd[1462]: time="2025-01-29T12:14:19.169813480Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:14:19.237979 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2318)
Jan 29 12:14:19.303963 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2322)
Jan 29 12:14:19.345348 containerd[1462]: time="2025-01-29T12:14:19.345310828Z" level=error msg="Failed to destroy network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:19.347156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336-shm.mount: Deactivated successfully.
Jan 29 12:14:19.347384 containerd[1462]: time="2025-01-29T12:14:19.347149657Z" level=error msg="encountered an error cleaning up failed sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:19.347384 containerd[1462]: time="2025-01-29T12:14:19.347243676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:19.347580 kubelet[1849]: E0129 12:14:19.347503 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:19.347635 kubelet[1849]: E0129 12:14:19.347590 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc"
Jan 29 12:14:19.347672 kubelet[1849]: E0129 12:14:19.347616 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc"
Jan 29 12:14:19.347700 kubelet[1849]: E0129 12:14:19.347680 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc"
Jan 29 12:14:19.951382 kubelet[1849]: E0129 12:14:19.951247 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:20.242860 containerd[1462]: time="2025-01-29T12:14:20.242696537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 29 12:14:20.244615 kubelet[1849]: I0129 12:14:20.243983 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336"
Jan 29 12:14:20.245774 containerd[1462]: time="2025-01-29T12:14:20.245690375Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\""
Jan 29 12:14:20.246765 containerd[1462]: time="2025-01-29T12:14:20.246148359Z" level=info msg="Ensure that sandbox 307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336 in task-service has been cleanup successfully"
Jan 29 12:14:20.250992 containerd[1462]: time="2025-01-29T12:14:20.248097525Z" level=info msg="TearDown network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" successfully"
Jan 29 12:14:20.250992 containerd[1462]: time="2025-01-29T12:14:20.248144324Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" returns successfully"
Jan 29 12:14:20.251367 containerd[1462]: time="2025-01-29T12:14:20.251302301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:1,}"
Jan 29 12:14:20.253805 systemd[1]: run-netns-cni\x2d1741e1ea\x2d5472\x2d39f5\x2d49e6\x2d05080a389f48.mount: Deactivated successfully.
Jan 29 12:14:20.367477 containerd[1462]: time="2025-01-29T12:14:20.367404807Z" level=error msg="Failed to destroy network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:20.370483 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881-shm.mount: Deactivated successfully.
Jan 29 12:14:20.370885 containerd[1462]: time="2025-01-29T12:14:20.370384368Z" level=error msg="encountered an error cleaning up failed sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:20.370885 containerd[1462]: time="2025-01-29T12:14:20.370638337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:20.372251 kubelet[1849]: E0129 12:14:20.371126 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:20.372251 kubelet[1849]: E0129 12:14:20.371232 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc"
Jan 29 12:14:20.372251 kubelet[1849]: E0129 12:14:20.371286 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc"
Jan 29 12:14:20.373098 kubelet[1849]: E0129 12:14:20.371503 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc"
Jan 29 12:14:20.952552 kubelet[1849]: E0129 12:14:20.952482 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:21.249151 kubelet[1849]: I0129 12:14:21.248997 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881"
Jan 29 12:14:21.250485 containerd[1462]: time="2025-01-29T12:14:21.250410349Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\""
Jan 29 12:14:21.251794 containerd[1462]: time="2025-01-29T12:14:21.251697385Z" level=info msg="Ensure that sandbox ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881 in task-service has been cleanup successfully"
Jan 29 12:14:21.254016 containerd[1462]: time="2025-01-29T12:14:21.252276648Z" level=info msg="TearDown network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" successfully"
Jan 29 12:14:21.254016 containerd[1462]: time="2025-01-29T12:14:21.252323066Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" returns successfully"
Jan 29 12:14:21.255025 containerd[1462]: time="2025-01-29T12:14:21.254563571Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\""
Jan 29 12:14:21.255025 containerd[1462]: time="2025-01-29T12:14:21.254738650Z" level=info msg="TearDown network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" successfully"
Jan 29 12:14:21.255025 containerd[1462]: time="2025-01-29T12:14:21.254766754Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" returns successfully"
Jan 29 12:14:21.255834 systemd[1]: run-netns-cni\x2d35202ff1\x2deacc\x2da3d4\x2ddf1d\x2d0aceae8cc184.mount: Deactivated successfully.
Jan 29 12:14:21.258815 containerd[1462]: time="2025-01-29T12:14:21.258631732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:2,}"
Jan 29 12:14:21.399539 containerd[1462]: time="2025-01-29T12:14:21.399485392Z" level=error msg="Failed to destroy network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:21.401186 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc-shm.mount: Deactivated successfully.
Jan 29 12:14:21.402123 containerd[1462]: time="2025-01-29T12:14:21.401248717Z" level=error msg="encountered an error cleaning up failed sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:21.402123 containerd[1462]: time="2025-01-29T12:14:21.401300826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:21.402192 kubelet[1849]: E0129 12:14:21.401678 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:21.402192 kubelet[1849]: E0129 12:14:21.401729 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc"
Jan 29 12:14:21.402192 kubelet[1849]: E0129 12:14:21.401752 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc"
Jan 29 12:14:21.402275 kubelet[1849]: E0129 12:14:21.401794 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc"
Jan 29 12:14:21.953443 kubelet[1849]: E0129 12:14:21.953369 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:22.253901 kubelet[1849]: I0129 12:14:22.253676 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc"
Jan 29 12:14:22.257002 containerd[1462]: time="2025-01-29T12:14:22.254363324Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\""
Jan 29 12:14:22.257002 containerd[1462]: time="2025-01-29T12:14:22.254778447Z" level=info msg="Ensure that sandbox 1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc in task-service has
been cleanup successfully" Jan 29 12:14:22.257724 containerd[1462]: time="2025-01-29T12:14:22.257678985Z" level=info msg="TearDown network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" successfully" Jan 29 12:14:22.257885 containerd[1462]: time="2025-01-29T12:14:22.257849817Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" returns successfully" Jan 29 12:14:22.259608 systemd[1]: run-netns-cni\x2d92619ad3\x2d1aac\x2d881a\x2d5dcb\x2dc46d9b0998f2.mount: Deactivated successfully. Jan 29 12:14:22.260090 containerd[1462]: time="2025-01-29T12:14:22.260043482Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\"" Jan 29 12:14:22.260416 containerd[1462]: time="2025-01-29T12:14:22.260375788Z" level=info msg="TearDown network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" successfully" Jan 29 12:14:22.261089 containerd[1462]: time="2025-01-29T12:14:22.261045301Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" returns successfully" Jan 29 12:14:22.264521 containerd[1462]: time="2025-01-29T12:14:22.264070694Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\"" Jan 29 12:14:22.264521 containerd[1462]: time="2025-01-29T12:14:22.264326245Z" level=info msg="TearDown network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" successfully" Jan 29 12:14:22.264521 containerd[1462]: time="2025-01-29T12:14:22.264355350Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" returns successfully" Jan 29 12:14:22.265587 containerd[1462]: time="2025-01-29T12:14:22.265537559Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:3,}" Jan 29 12:14:22.392868 containerd[1462]: time="2025-01-29T12:14:22.392733844Z" level=error msg="Failed to destroy network for sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:22.395364 containerd[1462]: time="2025-01-29T12:14:22.393183391Z" level=error msg="encountered an error cleaning up failed sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:22.395364 containerd[1462]: time="2025-01-29T12:14:22.393249606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:22.395510 kubelet[1849]: E0129 12:14:22.395041 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:22.395510 kubelet[1849]: E0129 
12:14:22.395091 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc" Jan 29 12:14:22.395510 kubelet[1849]: E0129 12:14:22.395114 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc" Jan 29 12:14:22.394765 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494-shm.mount: Deactivated successfully. 
Jan 29 12:14:22.395663 kubelet[1849]: E0129 12:14:22.395152 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc" Jan 29 12:14:22.954381 kubelet[1849]: E0129 12:14:22.954315 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:23.263123 kubelet[1849]: I0129 12:14:23.261536 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494" Jan 29 12:14:23.267015 containerd[1462]: time="2025-01-29T12:14:23.264105977Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\"" Jan 29 12:14:23.267015 containerd[1462]: time="2025-01-29T12:14:23.264677134Z" level=info msg="Ensure that sandbox 6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494 in task-service has been cleanup successfully" Jan 29 12:14:23.270999 containerd[1462]: time="2025-01-29T12:14:23.268145019Z" level=info msg="TearDown network for sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" successfully" Jan 29 12:14:23.270999 containerd[1462]: time="2025-01-29T12:14:23.268237964Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" returns 
successfully" Jan 29 12:14:23.273409 containerd[1462]: time="2025-01-29T12:14:23.271525900Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\"" Jan 29 12:14:23.273409 containerd[1462]: time="2025-01-29T12:14:23.271744542Z" level=info msg="TearDown network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" successfully" Jan 29 12:14:23.273409 containerd[1462]: time="2025-01-29T12:14:23.271784768Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" returns successfully" Jan 29 12:14:23.273886 systemd[1]: run-netns-cni\x2d88b5f683\x2d2035\x2d069e\x2ddb8f\x2d895a94da643c.mount: Deactivated successfully. Jan 29 12:14:23.281988 containerd[1462]: time="2025-01-29T12:14:23.279591479Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\"" Jan 29 12:14:23.281988 containerd[1462]: time="2025-01-29T12:14:23.279836351Z" level=info msg="TearDown network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" successfully" Jan 29 12:14:23.281988 containerd[1462]: time="2025-01-29T12:14:23.279871858Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" returns successfully" Jan 29 12:14:23.283360 containerd[1462]: time="2025-01-29T12:14:23.283293406Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\"" Jan 29 12:14:23.284707 containerd[1462]: time="2025-01-29T12:14:23.283885962Z" level=info msg="TearDown network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" successfully" Jan 29 12:14:23.285066 containerd[1462]: time="2025-01-29T12:14:23.285006324Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" returns successfully" Jan 29 12:14:23.288010 containerd[1462]: 
time="2025-01-29T12:14:23.287789208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:4,}" Jan 29 12:14:23.396656 containerd[1462]: time="2025-01-29T12:14:23.396596724Z" level=error msg="Failed to destroy network for sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:23.398414 containerd[1462]: time="2025-01-29T12:14:23.396965037Z" level=error msg="encountered an error cleaning up failed sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:23.398414 containerd[1462]: time="2025-01-29T12:14:23.397018718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:23.398482 kubelet[1849]: E0129 12:14:23.397231 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 12:14:23.398482 kubelet[1849]: E0129 12:14:23.397311 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc" Jan 29 12:14:23.398482 kubelet[1849]: E0129 12:14:23.397988 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc" Jan 29 12:14:23.398576 kubelet[1849]: E0129 12:14:23.398251 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc" Jan 29 12:14:23.399190 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b-shm.mount: Deactivated successfully. 
Jan 29 12:14:23.871853 systemd[1]: Created slice kubepods-besteffort-pode20c74cd_bd32_47f3_b39b_858ca4b1cb78.slice - libcontainer container kubepods-besteffort-pode20c74cd_bd32_47f3_b39b_858ca4b1cb78.slice. Jan 29 12:14:23.887887 kubelet[1849]: I0129 12:14:23.887776 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l9zl\" (UniqueName: \"kubernetes.io/projected/e20c74cd-bd32-47f3-b39b-858ca4b1cb78-kube-api-access-9l9zl\") pod \"nginx-deployment-7fcdb87857-qsdft\" (UID: \"e20c74cd-bd32-47f3-b39b-858ca4b1cb78\") " pod="default/nginx-deployment-7fcdb87857-qsdft" Jan 29 12:14:23.955976 kubelet[1849]: E0129 12:14:23.955855 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:24.182872 containerd[1462]: time="2025-01-29T12:14:24.181811110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qsdft,Uid:e20c74cd-bd32-47f3-b39b-858ca4b1cb78,Namespace:default,Attempt:0,}" Jan 29 12:14:24.279296 kubelet[1849]: I0129 12:14:24.279274 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b" Jan 29 12:14:24.280173 containerd[1462]: time="2025-01-29T12:14:24.280133421Z" level=info msg="StopPodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\"" Jan 29 12:14:24.280766 containerd[1462]: time="2025-01-29T12:14:24.280732199Z" level=info msg="Ensure that sandbox 25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b in task-service has been cleanup successfully" Jan 29 12:14:24.281071 containerd[1462]: time="2025-01-29T12:14:24.281053675Z" level=info msg="TearDown network for sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" successfully" Jan 29 12:14:24.281193 containerd[1462]: time="2025-01-29T12:14:24.281163240Z" level=info 
msg="StopPodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" returns successfully" Jan 29 12:14:24.282627 containerd[1462]: time="2025-01-29T12:14:24.281848211Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\"" Jan 29 12:14:24.284176 containerd[1462]: time="2025-01-29T12:14:24.284139699Z" level=info msg="TearDown network for sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" successfully" Jan 29 12:14:24.284271 containerd[1462]: time="2025-01-29T12:14:24.284255747Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" returns successfully" Jan 29 12:14:24.285112 systemd[1]: run-netns-cni\x2d90f4fbac\x2d32c5\x2de60a\x2d6300\x2dc30a4a9e900c.mount: Deactivated successfully. Jan 29 12:14:24.287665 containerd[1462]: time="2025-01-29T12:14:24.287524665Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\"" Jan 29 12:14:24.287665 containerd[1462]: time="2025-01-29T12:14:24.287618392Z" level=info msg="TearDown network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" successfully" Jan 29 12:14:24.287665 containerd[1462]: time="2025-01-29T12:14:24.287631957Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" returns successfully" Jan 29 12:14:24.288260 containerd[1462]: time="2025-01-29T12:14:24.288097885Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\"" Jan 29 12:14:24.288260 containerd[1462]: time="2025-01-29T12:14:24.288206440Z" level=info msg="TearDown network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" successfully" Jan 29 12:14:24.288260 containerd[1462]: time="2025-01-29T12:14:24.288220045Z" level=info msg="StopPodSandbox for 
\"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" returns successfully" Jan 29 12:14:24.288832 containerd[1462]: time="2025-01-29T12:14:24.288681545Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\"" Jan 29 12:14:24.288920 containerd[1462]: time="2025-01-29T12:14:24.288812381Z" level=info msg="TearDown network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" successfully" Jan 29 12:14:24.289047 containerd[1462]: time="2025-01-29T12:14:24.288991628Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" returns successfully" Jan 29 12:14:24.289824 containerd[1462]: time="2025-01-29T12:14:24.289598462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:5,}" Jan 29 12:14:24.342988 containerd[1462]: time="2025-01-29T12:14:24.341559172Z" level=error msg="Failed to destroy network for sandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:24.343901 containerd[1462]: time="2025-01-29T12:14:24.343303898Z" level=error msg="encountered an error cleaning up failed sandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:24.343901 containerd[1462]: time="2025-01-29T12:14:24.343366777Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qsdft,Uid:e20c74cd-bd32-47f3-b39b-858ca4b1cb78,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:24.344069 kubelet[1849]: E0129 12:14:24.343559 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:24.344069 kubelet[1849]: E0129 12:14:24.343616 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-qsdft" Jan 29 12:14:24.344069 kubelet[1849]: E0129 12:14:24.343640 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-qsdft" Jan 29 12:14:24.344043 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942-shm.mount: Deactivated successfully. Jan 29 12:14:24.344407 kubelet[1849]: E0129 12:14:24.343683 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-qsdft_default(e20c74cd-bd32-47f3-b39b-858ca4b1cb78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-qsdft_default(e20c74cd-bd32-47f3-b39b-858ca4b1cb78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-qsdft" podUID="e20c74cd-bd32-47f3-b39b-858ca4b1cb78" Jan 29 12:14:24.403579 containerd[1462]: time="2025-01-29T12:14:24.403514500Z" level=error msg="Failed to destroy network for sandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:24.404399 containerd[1462]: time="2025-01-29T12:14:24.403945402Z" level=error msg="encountered an error cleaning up failed sandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:24.404399 containerd[1462]: time="2025-01-29T12:14:24.404017788Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:24.404529 kubelet[1849]: E0129 12:14:24.404268 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:24.404529 kubelet[1849]: E0129 12:14:24.404328 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc" Jan 29 12:14:24.404529 kubelet[1849]: E0129 12:14:24.404352 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc" Jan 29 12:14:24.406233 kubelet[1849]: E0129 12:14:24.404398 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc" Jan 29 12:14:24.930239 kubelet[1849]: E0129 12:14:24.930159 1849 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:24.956669 kubelet[1849]: E0129 12:14:24.956597 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:25.271884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd-shm.mount: Deactivated successfully. 
Jan 29 12:14:25.284376 kubelet[1849]: I0129 12:14:25.284346 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd" Jan 29 12:14:25.285753 containerd[1462]: time="2025-01-29T12:14:25.285583754Z" level=info msg="StopPodSandbox for \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\"" Jan 29 12:14:25.286626 containerd[1462]: time="2025-01-29T12:14:25.285794079Z" level=info msg="Ensure that sandbox cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd in task-service has been cleanup successfully" Jan 29 12:14:25.286626 containerd[1462]: time="2025-01-29T12:14:25.286558108Z" level=info msg="TearDown network for sandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\" successfully" Jan 29 12:14:25.286626 containerd[1462]: time="2025-01-29T12:14:25.286577736Z" level=info msg="StopPodSandbox for \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\" returns successfully" Jan 29 12:14:25.288572 containerd[1462]: time="2025-01-29T12:14:25.288513482Z" level=info msg="StopPodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\"" Jan 29 12:14:25.288230 systemd[1]: run-netns-cni\x2d4442bf53\x2d2692\x2d255f\x2dd9d8\x2dd0a10402590b.mount: Deactivated successfully. 
Jan 29 12:14:25.289200 containerd[1462]: time="2025-01-29T12:14:25.288586028Z" level=info msg="TearDown network for sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" successfully"
Jan 29 12:14:25.289200 containerd[1462]: time="2025-01-29T12:14:25.288598863Z" level=info msg="StopPodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" returns successfully"
Jan 29 12:14:25.289256 kubelet[1849]: I0129 12:14:25.288759 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942"
Jan 29 12:14:25.291125 containerd[1462]: time="2025-01-29T12:14:25.289777683Z" level=info msg="StopPodSandbox for \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\""
Jan 29 12:14:25.291125 containerd[1462]: time="2025-01-29T12:14:25.289961649Z" level=info msg="Ensure that sandbox 1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942 in task-service has been cleanup successfully"
Jan 29 12:14:25.293244 containerd[1462]: time="2025-01-29T12:14:25.291962447Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\""
Jan 29 12:14:25.293244 containerd[1462]: time="2025-01-29T12:14:25.292035936Z" level=info msg="TearDown network for sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" successfully"
Jan 29 12:14:25.293244 containerd[1462]: time="2025-01-29T12:14:25.292048480Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" returns successfully"
Jan 29 12:14:25.293244 containerd[1462]: time="2025-01-29T12:14:25.292700447Z" level=info msg="TearDown network for sandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\" successfully"
Jan 29 12:14:25.293244 containerd[1462]: time="2025-01-29T12:14:25.292720585Z" level=info msg="StopPodSandbox for \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\" returns successfully"
Jan 29 12:14:25.292490 systemd[1]: run-netns-cni\x2d78d8e734\x2da01b\x2d718d\x2d8493\x2deed174565e81.mount: Deactivated successfully.
Jan 29 12:14:25.293448 containerd[1462]: time="2025-01-29T12:14:25.293308493Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\""
Jan 29 12:14:25.293448 containerd[1462]: time="2025-01-29T12:14:25.293378474Z" level=info msg="TearDown network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" successfully"
Jan 29 12:14:25.293448 containerd[1462]: time="2025-01-29T12:14:25.293390436Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" returns successfully"
Jan 29 12:14:25.295005 containerd[1462]: time="2025-01-29T12:14:25.294292827Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\""
Jan 29 12:14:25.295005 containerd[1462]: time="2025-01-29T12:14:25.294680928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qsdft,Uid:e20c74cd-bd32-47f3-b39b-858ca4b1cb78,Namespace:default,Attempt:1,}"
Jan 29 12:14:25.295213 containerd[1462]: time="2025-01-29T12:14:25.295187101Z" level=info msg="TearDown network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" successfully"
Jan 29 12:14:25.295213 containerd[1462]: time="2025-01-29T12:14:25.295209814Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" returns successfully"
Jan 29 12:14:25.296756 containerd[1462]: time="2025-01-29T12:14:25.296195570Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\""
Jan 29 12:14:25.296756 containerd[1462]: time="2025-01-29T12:14:25.296293133Z" level=info msg="TearDown network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" successfully"
Jan 29 12:14:25.296756 containerd[1462]: time="2025-01-29T12:14:25.296306389Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" returns successfully"
Jan 29 12:14:25.298404 containerd[1462]: time="2025-01-29T12:14:25.297986543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:6,}"
Jan 29 12:14:25.436210 containerd[1462]: time="2025-01-29T12:14:25.436148115Z" level=error msg="Failed to destroy network for sandbox \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:25.437216 containerd[1462]: time="2025-01-29T12:14:25.437187853Z" level=error msg="encountered an error cleaning up failed sandbox \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:25.437273 containerd[1462]: time="2025-01-29T12:14:25.437245903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:25.437486 kubelet[1849]: E0129 12:14:25.437442 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:25.437537 kubelet[1849]: E0129 12:14:25.437509 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc"
Jan 29 12:14:25.437572 kubelet[1849]: E0129 12:14:25.437533 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc"
Jan 29 12:14:25.437600 kubelet[1849]: E0129 12:14:25.437576 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc"
Jan 29 12:14:25.440651 containerd[1462]: time="2025-01-29T12:14:25.440452121Z" level=error msg="Failed to destroy network for sandbox \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:25.441465 containerd[1462]: time="2025-01-29T12:14:25.441073822Z" level=error msg="encountered an error cleaning up failed sandbox \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:25.441465 containerd[1462]: time="2025-01-29T12:14:25.441134145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qsdft,Uid:e20c74cd-bd32-47f3-b39b-858ca4b1cb78,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:25.441579 kubelet[1849]: E0129 12:14:25.441278 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:25.441579 kubelet[1849]: E0129 12:14:25.441319 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-qsdft"
Jan 29 12:14:25.441579 kubelet[1849]: E0129 12:14:25.441339 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-qsdft"
Jan 29 12:14:25.441667 kubelet[1849]: E0129 12:14:25.441375 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-qsdft_default(e20c74cd-bd32-47f3-b39b-858ca4b1cb78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-qsdft_default(e20c74cd-bd32-47f3-b39b-858ca4b1cb78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-qsdft" podUID="e20c74cd-bd32-47f3-b39b-858ca4b1cb78"
Jan 29 12:14:25.957394 kubelet[1849]: E0129 12:14:25.957359 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:26.270055 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c-shm.mount: Deactivated successfully.
Jan 29 12:14:26.270277 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103-shm.mount: Deactivated successfully.
Jan 29 12:14:26.295952 kubelet[1849]: I0129 12:14:26.295069 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103"
Jan 29 12:14:26.296077 containerd[1462]: time="2025-01-29T12:14:26.295623116Z" level=info msg="StopPodSandbox for \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\""
Jan 29 12:14:26.296077 containerd[1462]: time="2025-01-29T12:14:26.295802153Z" level=info msg="Ensure that sandbox 95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103 in task-service has been cleanup successfully"
Jan 29 12:14:26.296380 containerd[1462]: time="2025-01-29T12:14:26.296240970Z" level=info msg="TearDown network for sandbox \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\" successfully"
Jan 29 12:14:26.296380 containerd[1462]: time="2025-01-29T12:14:26.296258062Z" level=info msg="StopPodSandbox for \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\" returns successfully"
Jan 29 12:14:26.297997 containerd[1462]: time="2025-01-29T12:14:26.296721173Z" level=info msg="StopPodSandbox for \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\""
Jan 29 12:14:26.297997 containerd[1462]: time="2025-01-29T12:14:26.296790074Z" level=info msg="TearDown network for sandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\" successfully"
Jan 29 12:14:26.297997 containerd[1462]: time="2025-01-29T12:14:26.296802306Z" level=info msg="StopPodSandbox for \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\" returns successfully"
Jan 29 12:14:26.298657 containerd[1462]: time="2025-01-29T12:14:26.298630158Z" level=info msg="StopPodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\""
Jan 29 12:14:26.298721 containerd[1462]: time="2025-01-29T12:14:26.298701552Z" level=info msg="TearDown network for sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" successfully"
Jan 29 12:14:26.298721 containerd[1462]: time="2025-01-29T12:14:26.298717743Z" level=info msg="StopPodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" returns successfully"
Jan 29 12:14:26.299020 kubelet[1849]: I0129 12:14:26.298996 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c"
Jan 29 12:14:26.299666 systemd[1]: run-netns-cni\x2d95529ae3\x2db2fd\x2dd7dc\x2dfe9a\x2d1a25e801e274.mount: Deactivated successfully.
Jan 29 12:14:26.301987 containerd[1462]: time="2025-01-29T12:14:26.301857705Z" level=info msg="StopPodSandbox for \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\""
Jan 29 12:14:26.302510 containerd[1462]: time="2025-01-29T12:14:26.302482912Z" level=info msg="Ensure that sandbox 7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c in task-service has been cleanup successfully"
Jan 29 12:14:26.304513 containerd[1462]: time="2025-01-29T12:14:26.304464904Z" level=info msg="TearDown network for sandbox \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\" successfully"
Jan 29 12:14:26.304513 containerd[1462]: time="2025-01-29T12:14:26.304488910Z" level=info msg="StopPodSandbox for \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\" returns successfully"
Jan 29 12:14:26.304603 containerd[1462]: time="2025-01-29T12:14:26.304558570Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\""
Jan 29 12:14:26.304636 containerd[1462]: time="2025-01-29T12:14:26.304625116Z" level=info msg="TearDown network for sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" successfully"
Jan 29 12:14:26.304666 containerd[1462]: time="2025-01-29T12:14:26.304636978Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" returns successfully"
Jan 29 12:14:26.305673 systemd[1]: run-netns-cni\x2d6fbbd9af\x2dc0c4\x2d441e\x2d81d8\x2d3b9078f27fb8.mount: Deactivated successfully.
Jan 29 12:14:26.306026 containerd[1462]: time="2025-01-29T12:14:26.305997951Z" level=info msg="StopPodSandbox for \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\""
Jan 29 12:14:26.306194 containerd[1462]: time="2025-01-29T12:14:26.306175515Z" level=info msg="TearDown network for sandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\" successfully"
Jan 29 12:14:26.306262 containerd[1462]: time="2025-01-29T12:14:26.306247391Z" level=info msg="StopPodSandbox for \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\" returns successfully"
Jan 29 12:14:26.306389 containerd[1462]: time="2025-01-29T12:14:26.306062722Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\""
Jan 29 12:14:26.306510 containerd[1462]: time="2025-01-29T12:14:26.306493023Z" level=info msg="TearDown network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" successfully"
Jan 29 12:14:26.306574 containerd[1462]: time="2025-01-29T12:14:26.306560810Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" returns successfully"
Jan 29 12:14:26.307303 containerd[1462]: time="2025-01-29T12:14:26.307169307Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\""
Jan 29 12:14:26.307303 containerd[1462]: time="2025-01-29T12:14:26.307242053Z" level=info msg="TearDown network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" successfully"
Jan 29 12:14:26.307303 containerd[1462]: time="2025-01-29T12:14:26.307254226Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" returns successfully"
Jan 29 12:14:26.307773 containerd[1462]: time="2025-01-29T12:14:26.307740532Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\""
Jan 29 12:14:26.307838 containerd[1462]: time="2025-01-29T12:14:26.307819089Z" level=info msg="TearDown network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" successfully"
Jan 29 12:14:26.307867 containerd[1462]: time="2025-01-29T12:14:26.307835701Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" returns successfully"
Jan 29 12:14:26.307963 containerd[1462]: time="2025-01-29T12:14:26.307912586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qsdft,Uid:e20c74cd-bd32-47f3-b39b-858ca4b1cb78,Namespace:default,Attempt:2,}"
Jan 29 12:14:26.308850 containerd[1462]: time="2025-01-29T12:14:26.308816247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:7,}"
Jan 29 12:14:26.457998 containerd[1462]: time="2025-01-29T12:14:26.457738883Z" level=error msg="Failed to destroy network for sandbox \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:26.459285 containerd[1462]: time="2025-01-29T12:14:26.459121335Z" level=error msg="encountered an error cleaning up failed sandbox \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:26.459285 containerd[1462]: time="2025-01-29T12:14:26.459171891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:26.459382 kubelet[1849]: E0129 12:14:26.459349 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:26.459425 kubelet[1849]: E0129 12:14:26.459406 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc"
Jan 29 12:14:26.459452 kubelet[1849]: E0129 12:14:26.459431 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc"
Jan 29 12:14:26.459648 kubelet[1849]: E0129 12:14:26.459473 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc"
Jan 29 12:14:26.483407 containerd[1462]: time="2025-01-29T12:14:26.483374462Z" level=error msg="Failed to destroy network for sandbox \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:26.484936 containerd[1462]: time="2025-01-29T12:14:26.484784477Z" level=error msg="encountered an error cleaning up failed sandbox \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:26.484936 containerd[1462]: time="2025-01-29T12:14:26.484880888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qsdft,Uid:e20c74cd-bd32-47f3-b39b-858ca4b1cb78,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:26.485316 kubelet[1849]: E0129 12:14:26.485164 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:26.485316 kubelet[1849]: E0129 12:14:26.485232 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-qsdft"
Jan 29 12:14:26.485316 kubelet[1849]: E0129 12:14:26.485254 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-qsdft"
Jan 29 12:14:26.485418 kubelet[1849]: E0129 12:14:26.485300 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-qsdft_default(e20c74cd-bd32-47f3-b39b-858ca4b1cb78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-qsdft_default(e20c74cd-bd32-47f3-b39b-858ca4b1cb78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-qsdft" podUID="e20c74cd-bd32-47f3-b39b-858ca4b1cb78"
Jan 29 12:14:26.957993 kubelet[1849]: E0129 12:14:26.957962 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:27.269606 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480-shm.mount: Deactivated successfully.
Jan 29 12:14:27.306976 kubelet[1849]: I0129 12:14:27.306834 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480"
Jan 29 12:14:27.310950 containerd[1462]: time="2025-01-29T12:14:27.307356343Z" level=info msg="StopPodSandbox for \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\""
Jan 29 12:14:27.310950 containerd[1462]: time="2025-01-29T12:14:27.307543736Z" level=info msg="Ensure that sandbox aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480 in task-service has been cleanup successfully"
Jan 29 12:14:27.310676 systemd[1]: run-netns-cni\x2db91779d3\x2d93a8\x2d654b\x2de0f8\x2d179ec50140f4.mount: Deactivated successfully.
Jan 29 12:14:27.312674 containerd[1462]: time="2025-01-29T12:14:27.311666196Z" level=info msg="TearDown network for sandbox \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\" successfully"
Jan 29 12:14:27.312674 containerd[1462]: time="2025-01-29T12:14:27.311688017Z" level=info msg="StopPodSandbox for \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\" returns successfully"
Jan 29 12:14:27.312674 containerd[1462]: time="2025-01-29T12:14:27.312023178Z" level=info msg="StopPodSandbox for \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\""
Jan 29 12:14:27.312674 containerd[1462]: time="2025-01-29T12:14:27.312094382Z" level=info msg="TearDown network for sandbox \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\" successfully"
Jan 29 12:14:27.312674 containerd[1462]: time="2025-01-29T12:14:27.312107687Z" level=info msg="StopPodSandbox for \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\" returns successfully"
Jan 29 12:14:27.314239 containerd[1462]: time="2025-01-29T12:14:27.314093154Z" level=info msg="StopPodSandbox for \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\""
Jan 29 12:14:27.314239 containerd[1462]: time="2025-01-29T12:14:27.314160311Z" level=info msg="TearDown network for sandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\" successfully"
Jan 29 12:14:27.314239 containerd[1462]: time="2025-01-29T12:14:27.314173015Z" level=info msg="StopPodSandbox for \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\" returns successfully"
Jan 29 12:14:27.314390 kubelet[1849]: I0129 12:14:27.314368 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055"
Jan 29 12:14:27.314794 containerd[1462]: time="2025-01-29T12:14:27.314754460Z" level=info msg="StopPodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\""
Jan 29 12:14:27.314836 containerd[1462]: time="2025-01-29T12:14:27.314828208Z" level=info msg="TearDown network for sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" successfully"
Jan 29 12:14:27.314864 containerd[1462]: time="2025-01-29T12:14:27.314840461Z" level=info msg="StopPodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" returns successfully"
Jan 29 12:14:27.314892 containerd[1462]: time="2025-01-29T12:14:27.314884304Z" level=info msg="StopPodSandbox for \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\""
Jan 29 12:14:27.316274 containerd[1462]: time="2025-01-29T12:14:27.316240316Z" level=info msg="Ensure that sandbox 6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055 in task-service has been cleanup successfully"
Jan 29 12:14:27.317392 containerd[1462]: time="2025-01-29T12:14:27.317360034Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\""
Jan 29 12:14:27.317588 containerd[1462]: time="2025-01-29T12:14:27.317434734Z" level=info msg="TearDown network for sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" successfully"
Jan 29 12:14:27.317588 containerd[1462]: time="2025-01-29T12:14:27.317475822Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" returns successfully"
Jan 29 12:14:27.318814 containerd[1462]: time="2025-01-29T12:14:27.318788193Z" level=info msg="TearDown network for sandbox \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\" successfully"
Jan 29 12:14:27.318814 containerd[1462]: time="2025-01-29T12:14:27.318809994Z" level=info msg="StopPodSandbox for \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\" returns successfully"
Jan 29 12:14:27.318946 containerd[1462]: time="2025-01-29T12:14:27.318905664Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\""
Jan 29 12:14:27.319149 containerd[1462]: time="2025-01-29T12:14:27.319127702Z" level=info msg="StopPodSandbox for \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\""
Jan 29 12:14:27.319211 containerd[1462]: time="2025-01-29T12:14:27.319193586Z" level=info msg="TearDown network for sandbox \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\" successfully"
Jan 29 12:14:27.319255 containerd[1462]: time="2025-01-29T12:14:27.319208424Z" level=info msg="StopPodSandbox for \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\" returns successfully"
Jan 29 12:14:27.320386 systemd[1]: run-netns-cni\x2d02464831\x2d0dff\x2d6edc\x2d4463\x2da127346464c6.mount: Deactivated successfully.
Jan 29 12:14:27.321091 containerd[1462]: time="2025-01-29T12:14:27.321061041Z" level=info msg="TearDown network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" successfully"
Jan 29 12:14:27.321091 containerd[1462]: time="2025-01-29T12:14:27.321082442Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" returns successfully"
Jan 29 12:14:27.321442 containerd[1462]: time="2025-01-29T12:14:27.321341149Z" level=info msg="StopPodSandbox for \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\""
Jan 29 12:14:27.321442 containerd[1462]: time="2025-01-29T12:14:27.321417362Z" level=info msg="TearDown network for sandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\" successfully"
Jan 29 12:14:27.321442 containerd[1462]: time="2025-01-29T12:14:27.321428913Z" level=info msg="StopPodSandbox for \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\" returns successfully"
Jan 29 12:14:27.321540 containerd[1462]: time="2025-01-29T12:14:27.321511579Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\""
Jan 29 12:14:27.322057 containerd[1462]: time="2025-01-29T12:14:27.321566272Z" level=info msg="TearDown network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" successfully"
Jan 29 12:14:27.322057 containerd[1462]: time="2025-01-29T12:14:27.321583184Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" returns successfully"
Jan 29 12:14:27.322343 containerd[1462]: time="2025-01-29T12:14:27.322314170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qsdft,Uid:e20c74cd-bd32-47f3-b39b-858ca4b1cb78,Namespace:default,Attempt:3,}"
Jan 29 12:14:27.322888 containerd[1462]: time="2025-01-29T12:14:27.322642519Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\""
Jan 29 12:14:27.322888 containerd[1462]: time="2025-01-29T12:14:27.322709815Z" level=info msg="TearDown network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" successfully"
Jan 29 12:14:27.322888 containerd[1462]: time="2025-01-29T12:14:27.322720675Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" returns successfully"
Jan 29 12:14:27.326303 containerd[1462]: time="2025-01-29T12:14:27.326129031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:8,}"
Jan 29 12:14:27.450600 containerd[1462]: time="2025-01-29T12:14:27.450476147Z" level=error msg="Failed to destroy network for sandbox \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:27.451428 containerd[1462]: time="2025-01-29T12:14:27.451272105Z" level=error msg="encountered an error cleaning up failed sandbox \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:27.451428 containerd[1462]: time="2025-01-29T12:14:27.451341275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:27.451594 kubelet[1849]: E0129 12:14:27.451526 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:27.451594 kubelet[1849]: E0129 12:14:27.451588 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc"
Jan 29 12:14:27.451695 kubelet[1849]: E0129 12:14:27.451611 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc"
Jan 29 12:14:27.451695 kubelet[1849]: E0129 12:14:27.451658 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc"
Jan 29 12:14:27.461528 containerd[1462]: time="2025-01-29T12:14:27.461435928Z" level=error msg="Failed to destroy network for sandbox \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:27.461950 containerd[1462]: time="2025-01-29T12:14:27.461861369Z" level=error msg="encountered an error cleaning up failed sandbox \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:14:27.462042 containerd[1462]: time="2025-01-29T12:14:27.461915039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qsdft,Uid:e20c74cd-bd32-47f3-b39b-858ca4b1cb78,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:27.462608 kubelet[1849]: E0129 12:14:27.462268 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:27.462608 kubelet[1849]: E0129 12:14:27.462325 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-qsdft" Jan 29 12:14:27.462608 kubelet[1849]: E0129 12:14:27.462354 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-qsdft" Jan 29 
12:14:27.462745 kubelet[1849]: E0129 12:14:27.462410 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-qsdft_default(e20c74cd-bd32-47f3-b39b-858ca4b1cb78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-qsdft_default(e20c74cd-bd32-47f3-b39b-858ca4b1cb78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-qsdft" podUID="e20c74cd-bd32-47f3-b39b-858ca4b1cb78" Jan 29 12:14:27.958824 kubelet[1849]: E0129 12:14:27.958787 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:28.271567 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b-shm.mount: Deactivated successfully. 
Jan 29 12:14:28.322364 kubelet[1849]: I0129 12:14:28.321688 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922" Jan 29 12:14:28.322656 containerd[1462]: time="2025-01-29T12:14:28.322630450Z" level=info msg="StopPodSandbox for \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\"" Jan 29 12:14:28.323533 containerd[1462]: time="2025-01-29T12:14:28.323364542Z" level=info msg="Ensure that sandbox f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922 in task-service has been cleanup successfully" Jan 29 12:14:28.325039 containerd[1462]: time="2025-01-29T12:14:28.325016380Z" level=info msg="TearDown network for sandbox \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\" successfully" Jan 29 12:14:28.325452 containerd[1462]: time="2025-01-29T12:14:28.325094948Z" level=info msg="StopPodSandbox for \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\" returns successfully" Jan 29 12:14:28.326562 systemd[1]: run-netns-cni\x2d430f9278\x2ddc27\x2d8388\x2df4dc\x2d6ff46febe7f0.mount: Deactivated successfully. 
Jan 29 12:14:28.328133 containerd[1462]: time="2025-01-29T12:14:28.328101355Z" level=info msg="StopPodSandbox for \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\"" Jan 29 12:14:28.328270 containerd[1462]: time="2025-01-29T12:14:28.328198869Z" level=info msg="TearDown network for sandbox \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\" successfully" Jan 29 12:14:28.328270 containerd[1462]: time="2025-01-29T12:14:28.328216182Z" level=info msg="StopPodSandbox for \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\" returns successfully" Jan 29 12:14:28.329440 containerd[1462]: time="2025-01-29T12:14:28.329418885Z" level=info msg="StopPodSandbox for \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\"" Jan 29 12:14:28.329574 containerd[1462]: time="2025-01-29T12:14:28.329557807Z" level=info msg="TearDown network for sandbox \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\" successfully" Jan 29 12:14:28.329648 containerd[1462]: time="2025-01-29T12:14:28.329634691Z" level=info msg="StopPodSandbox for \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\" returns successfully" Jan 29 12:14:28.331602 containerd[1462]: time="2025-01-29T12:14:28.330774717Z" level=info msg="StopPodSandbox for \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\"" Jan 29 12:14:28.331602 containerd[1462]: time="2025-01-29T12:14:28.330840511Z" level=info msg="TearDown network for sandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\" successfully" Jan 29 12:14:28.331602 containerd[1462]: time="2025-01-29T12:14:28.330851672Z" level=info msg="StopPodSandbox for \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\" returns successfully" Jan 29 12:14:28.332270 containerd[1462]: time="2025-01-29T12:14:28.332248761Z" level=info msg="StopPodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\"" Jan 29 12:14:28.332332 
containerd[1462]: time="2025-01-29T12:14:28.332314575Z" level=info msg="TearDown network for sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" successfully" Jan 29 12:14:28.332359 containerd[1462]: time="2025-01-29T12:14:28.332330615Z" level=info msg="StopPodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" returns successfully" Jan 29 12:14:28.332607 containerd[1462]: time="2025-01-29T12:14:28.332562933Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\"" Jan 29 12:14:28.332651 containerd[1462]: time="2025-01-29T12:14:28.332629588Z" level=info msg="TearDown network for sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" successfully" Jan 29 12:14:28.332651 containerd[1462]: time="2025-01-29T12:14:28.332640469Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" returns successfully" Jan 29 12:14:28.332856 containerd[1462]: time="2025-01-29T12:14:28.332835385Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\"" Jan 29 12:14:28.332921 containerd[1462]: time="2025-01-29T12:14:28.332903364Z" level=info msg="TearDown network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" successfully" Jan 29 12:14:28.332966 containerd[1462]: time="2025-01-29T12:14:28.332919684Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" returns successfully" Jan 29 12:14:28.334024 kubelet[1849]: I0129 12:14:28.333786 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b" Jan 29 12:14:28.334235 containerd[1462]: time="2025-01-29T12:14:28.334212747Z" level=info msg="StopPodSandbox for \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\"" Jan 29 
12:14:28.334429 containerd[1462]: time="2025-01-29T12:14:28.334395601Z" level=info msg="Ensure that sandbox a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b in task-service has been cleanup successfully" Jan 29 12:14:28.336355 containerd[1462]: time="2025-01-29T12:14:28.335972790Z" level=info msg="TearDown network for sandbox \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\" successfully" Jan 29 12:14:28.336355 containerd[1462]: time="2025-01-29T12:14:28.335988490Z" level=info msg="StopPodSandbox for \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\" returns successfully" Jan 29 12:14:28.336355 containerd[1462]: time="2025-01-29T12:14:28.336047100Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\"" Jan 29 12:14:28.336355 containerd[1462]: time="2025-01-29T12:14:28.336103697Z" level=info msg="TearDown network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" successfully" Jan 29 12:14:28.336355 containerd[1462]: time="2025-01-29T12:14:28.336113665Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" returns successfully" Jan 29 12:14:28.336026 systemd[1]: run-netns-cni\x2db751649c\x2df4ce\x2d7dbb\x2d0a2d\x2dd9c71adbb642.mount: Deactivated successfully. 
Jan 29 12:14:28.337629 containerd[1462]: time="2025-01-29T12:14:28.337604290Z" level=info msg="StopPodSandbox for \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\"" Jan 29 12:14:28.337682 containerd[1462]: time="2025-01-29T12:14:28.337672920Z" level=info msg="TearDown network for sandbox \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\" successfully" Jan 29 12:14:28.337712 containerd[1462]: time="2025-01-29T12:14:28.337684702Z" level=info msg="StopPodSandbox for \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\" returns successfully" Jan 29 12:14:28.337755 containerd[1462]: time="2025-01-29T12:14:28.337733373Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\"" Jan 29 12:14:28.337817 containerd[1462]: time="2025-01-29T12:14:28.337794819Z" level=info msg="TearDown network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" successfully" Jan 29 12:14:28.337817 containerd[1462]: time="2025-01-29T12:14:28.337811631Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" returns successfully" Jan 29 12:14:28.339858 containerd[1462]: time="2025-01-29T12:14:28.339784574Z" level=info msg="StopPodSandbox for \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\"" Jan 29 12:14:28.340213 containerd[1462]: time="2025-01-29T12:14:28.339898518Z" level=info msg="TearDown network for sandbox \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\" successfully" Jan 29 12:14:28.340976 containerd[1462]: time="2025-01-29T12:14:28.340947643Z" level=info msg="StopPodSandbox for \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\" returns successfully" Jan 29 12:14:28.341822 containerd[1462]: time="2025-01-29T12:14:28.341780741Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:9,}" Jan 29 12:14:28.353899 containerd[1462]: time="2025-01-29T12:14:28.353836069Z" level=info msg="StopPodSandbox for \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\"" Jan 29 12:14:28.354192 containerd[1462]: time="2025-01-29T12:14:28.353982635Z" level=info msg="TearDown network for sandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\" successfully" Jan 29 12:14:28.354192 containerd[1462]: time="2025-01-29T12:14:28.353999466Z" level=info msg="StopPodSandbox for \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\" returns successfully" Jan 29 12:14:28.358511 containerd[1462]: time="2025-01-29T12:14:28.358486120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qsdft,Uid:e20c74cd-bd32-47f3-b39b-858ca4b1cb78,Namespace:default,Attempt:4,}" Jan 29 12:14:28.758183 containerd[1462]: time="2025-01-29T12:14:28.758050851Z" level=error msg="Failed to destroy network for sandbox \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:28.758495 containerd[1462]: time="2025-01-29T12:14:28.758395019Z" level=error msg="encountered an error cleaning up failed sandbox \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:28.758495 containerd[1462]: time="2025-01-29T12:14:28.758455051Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qsdft,Uid:e20c74cd-bd32-47f3-b39b-858ca4b1cb78,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:28.759069 kubelet[1849]: E0129 12:14:28.758646 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:28.759069 kubelet[1849]: E0129 12:14:28.758699 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-qsdft" Jan 29 12:14:28.759069 kubelet[1849]: E0129 12:14:28.758726 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-qsdft" Jan 29 12:14:28.759178 kubelet[1849]: E0129 12:14:28.758768 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-qsdft_default(e20c74cd-bd32-47f3-b39b-858ca4b1cb78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-qsdft_default(e20c74cd-bd32-47f3-b39b-858ca4b1cb78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-qsdft" podUID="e20c74cd-bd32-47f3-b39b-858ca4b1cb78" Jan 29 12:14:28.762412 containerd[1462]: time="2025-01-29T12:14:28.762382543Z" level=error msg="Failed to destroy network for sandbox \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:28.762648 containerd[1462]: time="2025-01-29T12:14:28.762622235Z" level=error msg="encountered an error cleaning up failed sandbox \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:28.762699 containerd[1462]: time="2025-01-29T12:14:28.762673511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:28.763085 kubelet[1849]: E0129 12:14:28.762855 1849 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:14:28.763085 kubelet[1849]: E0129 12:14:28.763034 1849 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc" Jan 29 12:14:28.763085 kubelet[1849]: E0129 12:14:28.763052 1849 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w9pfc" Jan 29 12:14:28.763337 kubelet[1849]: E0129 12:14:28.763221 1849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w9pfc_calico-system(87fda5ff-7ae8-4a23-9f68-7722f3966cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w9pfc" podUID="87fda5ff-7ae8-4a23-9f68-7722f3966cdc" Jan 29 12:14:28.916038 containerd[1462]: time="2025-01-29T12:14:28.915828218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:28.917518 containerd[1462]: time="2025-01-29T12:14:28.917215569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 12:14:28.918676 containerd[1462]: time="2025-01-29T12:14:28.918629511Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:28.923093 containerd[1462]: time="2025-01-29T12:14:28.923007439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:28.924488 containerd[1462]: time="2025-01-29T12:14:28.923861517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.680248121s" Jan 29 12:14:28.924488 containerd[1462]: time="2025-01-29T12:14:28.923939554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 
12:14:28.932739 containerd[1462]: time="2025-01-29T12:14:28.932690132Z" level=info msg="CreateContainer within sandbox \"926e6a30d924e7c83a2fd020f75ac7dfebe44fb900d0a11b68dd51ae4489ac08\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:14:28.950410 containerd[1462]: time="2025-01-29T12:14:28.950336433Z" level=info msg="CreateContainer within sandbox \"926e6a30d924e7c83a2fd020f75ac7dfebe44fb900d0a11b68dd51ae4489ac08\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f16d3f6b007d405dd6156feef1fc4adaaa36df57f1a85ab1e36490ea77038689\"" Jan 29 12:14:28.951018 containerd[1462]: time="2025-01-29T12:14:28.950952101Z" level=info msg="StartContainer for \"f16d3f6b007d405dd6156feef1fc4adaaa36df57f1a85ab1e36490ea77038689\"" Jan 29 12:14:28.959922 kubelet[1849]: E0129 12:14:28.959835 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:28.980230 systemd[1]: Started cri-containerd-f16d3f6b007d405dd6156feef1fc4adaaa36df57f1a85ab1e36490ea77038689.scope - libcontainer container f16d3f6b007d405dd6156feef1fc4adaaa36df57f1a85ab1e36490ea77038689. Jan 29 12:14:29.019672 containerd[1462]: time="2025-01-29T12:14:29.018140974Z" level=info msg="StartContainer for \"f16d3f6b007d405dd6156feef1fc4adaaa36df57f1a85ab1e36490ea77038689\" returns successfully" Jan 29 12:14:29.097717 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 12:14:29.097799 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 12:14:29.289508 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8-shm.mount: Deactivated successfully. Jan 29 12:14:29.289759 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9-shm.mount: Deactivated successfully. 
Jan 29 12:14:29.289918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2966993829.mount: Deactivated successfully. Jan 29 12:14:29.352548 kubelet[1849]: I0129 12:14:29.352017 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9" Jan 29 12:14:29.352794 containerd[1462]: time="2025-01-29T12:14:29.352766594Z" level=info msg="StopPodSandbox for \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\"" Jan 29 12:14:29.353389 containerd[1462]: time="2025-01-29T12:14:29.353160625Z" level=info msg="Ensure that sandbox 2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9 in task-service has been cleanup successfully" Jan 29 12:14:29.354995 containerd[1462]: time="2025-01-29T12:14:29.354964970Z" level=info msg="TearDown network for sandbox \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\" successfully" Jan 29 12:14:29.355139 containerd[1462]: time="2025-01-29T12:14:29.355123648Z" level=info msg="StopPodSandbox for \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\" returns successfully" Jan 29 12:14:29.357689 systemd[1]: run-netns-cni\x2d31dbf402\x2d4061\x2dcca0\x2d2b92\x2ddc54266dec5d.mount: Deactivated successfully. 
Jan 29 12:14:29.359702 containerd[1462]: time="2025-01-29T12:14:29.359678439Z" level=info msg="StopPodSandbox for \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\"" Jan 29 12:14:29.359866 containerd[1462]: time="2025-01-29T12:14:29.359849180Z" level=info msg="TearDown network for sandbox \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\" successfully" Jan 29 12:14:29.360251 containerd[1462]: time="2025-01-29T12:14:29.360234536Z" level=info msg="StopPodSandbox for \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\" returns successfully" Jan 29 12:14:29.361395 kubelet[1849]: I0129 12:14:29.361374 1849 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8" Jan 29 12:14:29.361857 containerd[1462]: time="2025-01-29T12:14:29.361795973Z" level=info msg="StopPodSandbox for \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\"" Jan 29 12:14:29.362691 containerd[1462]: time="2025-01-29T12:14:29.362157834Z" level=info msg="TearDown network for sandbox \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\" successfully" Jan 29 12:14:29.362691 containerd[1462]: time="2025-01-29T12:14:29.362174525Z" level=info msg="StopPodSandbox for \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\" returns successfully" Jan 29 12:14:29.362691 containerd[1462]: time="2025-01-29T12:14:29.362276778Z" level=info msg="StopPodSandbox for \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\"" Jan 29 12:14:29.362691 containerd[1462]: time="2025-01-29T12:14:29.362517521Z" level=info msg="Ensure that sandbox 4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8 in task-service has been cleanup successfully" Jan 29 12:14:29.364022 containerd[1462]: time="2025-01-29T12:14:29.364000402Z" level=info msg="StopPodSandbox for 
\"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\"" Jan 29 12:14:29.364243 containerd[1462]: time="2025-01-29T12:14:29.364223651Z" level=info msg="TearDown network for sandbox \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\" successfully" Jan 29 12:14:29.364313 containerd[1462]: time="2025-01-29T12:14:29.364298301Z" level=info msg="StopPodSandbox for \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\" returns successfully" Jan 29 12:14:29.364645 containerd[1462]: time="2025-01-29T12:14:29.364624836Z" level=info msg="StopPodSandbox for \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\"" Jan 29 12:14:29.364793 containerd[1462]: time="2025-01-29T12:14:29.364774878Z" level=info msg="TearDown network for sandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\" successfully" Jan 29 12:14:29.365411 containerd[1462]: time="2025-01-29T12:14:29.365183978Z" level=info msg="StopPodSandbox for \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\" returns successfully" Jan 29 12:14:29.368004 containerd[1462]: time="2025-01-29T12:14:29.366198727Z" level=info msg="TearDown network for sandbox \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\" successfully" Jan 29 12:14:29.368107 containerd[1462]: time="2025-01-29T12:14:29.368087041Z" level=info msg="StopPodSandbox for \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\" returns successfully" Jan 29 12:14:29.368871 systemd[1]: run-netns-cni\x2ddf46a0c3\x2d57c1\x2dbf9e\x2dab85\x2d8017b43ee20c.mount: Deactivated successfully. 
Jan 29 12:14:29.370017 containerd[1462]: time="2025-01-29T12:14:29.369641956Z" level=info msg="StopPodSandbox for \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\"" Jan 29 12:14:29.370017 containerd[1462]: time="2025-01-29T12:14:29.369715505Z" level=info msg="TearDown network for sandbox \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\" successfully" Jan 29 12:14:29.370017 containerd[1462]: time="2025-01-29T12:14:29.369727017Z" level=info msg="StopPodSandbox for \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\" returns successfully" Jan 29 12:14:29.373000 containerd[1462]: time="2025-01-29T12:14:29.372753972Z" level=info msg="StopPodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\"" Jan 29 12:14:29.373000 containerd[1462]: time="2025-01-29T12:14:29.372831809Z" level=info msg="TearDown network for sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" successfully" Jan 29 12:14:29.373000 containerd[1462]: time="2025-01-29T12:14:29.372843591Z" level=info msg="StopPodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" returns successfully" Jan 29 12:14:29.373533 containerd[1462]: time="2025-01-29T12:14:29.373265945Z" level=info msg="StopPodSandbox for \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\"" Jan 29 12:14:29.373533 containerd[1462]: time="2025-01-29T12:14:29.373334625Z" level=info msg="TearDown network for sandbox \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\" successfully" Jan 29 12:14:29.373533 containerd[1462]: time="2025-01-29T12:14:29.373345966Z" level=info msg="StopPodSandbox for \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\" returns successfully" Jan 29 12:14:29.374525 containerd[1462]: time="2025-01-29T12:14:29.374504085Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\"" Jan 29 12:14:29.375019 
containerd[1462]: time="2025-01-29T12:14:29.374756720Z" level=info msg="TearDown network for sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" successfully" Jan 29 12:14:29.375019 containerd[1462]: time="2025-01-29T12:14:29.374774363Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" returns successfully" Jan 29 12:14:29.375019 containerd[1462]: time="2025-01-29T12:14:29.374671290Z" level=info msg="StopPodSandbox for \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\"" Jan 29 12:14:29.375019 containerd[1462]: time="2025-01-29T12:14:29.374856258Z" level=info msg="TearDown network for sandbox \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\" successfully" Jan 29 12:14:29.375019 containerd[1462]: time="2025-01-29T12:14:29.374867158Z" level=info msg="StopPodSandbox for \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\" returns successfully" Jan 29 12:14:29.375748 containerd[1462]: time="2025-01-29T12:14:29.375509547Z" level=info msg="StopPodSandbox for \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\"" Jan 29 12:14:29.375748 containerd[1462]: time="2025-01-29T12:14:29.375584397Z" level=info msg="TearDown network for sandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\" successfully" Jan 29 12:14:29.375748 containerd[1462]: time="2025-01-29T12:14:29.375595638Z" level=info msg="StopPodSandbox for \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\" returns successfully" Jan 29 12:14:29.375748 containerd[1462]: time="2025-01-29T12:14:29.375652716Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\"" Jan 29 12:14:29.375748 containerd[1462]: time="2025-01-29T12:14:29.375710535Z" level=info msg="TearDown network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" successfully" Jan 29 12:14:29.375748 
containerd[1462]: time="2025-01-29T12:14:29.375720433Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" returns successfully" Jan 29 12:14:29.376652 containerd[1462]: time="2025-01-29T12:14:29.376635245Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\"" Jan 29 12:14:29.377450 containerd[1462]: time="2025-01-29T12:14:29.377246806Z" level=info msg="TearDown network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" successfully" Jan 29 12:14:29.377733 containerd[1462]: time="2025-01-29T12:14:29.377709055Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" returns successfully" Jan 29 12:14:29.378786 containerd[1462]: time="2025-01-29T12:14:29.378641670Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\"" Jan 29 12:14:29.378786 containerd[1462]: time="2025-01-29T12:14:29.378708976Z" level=info msg="TearDown network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" successfully" Jan 29 12:14:29.378786 containerd[1462]: time="2025-01-29T12:14:29.378720890Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" returns successfully" Jan 29 12:14:29.379062 containerd[1462]: time="2025-01-29T12:14:29.379042795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qsdft,Uid:e20c74cd-bd32-47f3-b39b-858ca4b1cb78,Namespace:default,Attempt:5,}" Jan 29 12:14:29.379599 containerd[1462]: time="2025-01-29T12:14:29.379580216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:10,}" Jan 29 12:14:29.631597 systemd-networkd[1373]: cali2583a72767d: Link UP Jan 29 12:14:29.633605 systemd-networkd[1373]: cali2583a72767d: Gained 
carrier Jan 29 12:14:29.643128 kubelet[1849]: I0129 12:14:29.643082 1849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bhbqj" podStartSLOduration=3.193118681 podStartE2EDuration="24.643064878s" podCreationTimestamp="2025-01-29 12:14:05 +0000 UTC" firstStartedPulling="2025-01-29 12:14:07.474809263 +0000 UTC m=+3.243851259" lastFinishedPulling="2025-01-29 12:14:28.92475547 +0000 UTC m=+24.693797456" observedRunningTime="2025-01-29 12:14:29.381220463 +0000 UTC m=+25.150262459" watchObservedRunningTime="2025-01-29 12:14:29.643064878 +0000 UTC m=+25.412106864" Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.457 [INFO][2877] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.487 [INFO][2877] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.137-k8s-nginx--deployment--7fcdb87857--qsdft-eth0 nginx-deployment-7fcdb87857- default e20c74cd-bd32-47f3-b39b-858ca4b1cb78 1156 0 2025-01-29 12:14:23 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.137 nginx-deployment-7fcdb87857-qsdft eth0 default [] [] [kns.default ksa.default.default] cali2583a72767d [] []}} ContainerID="b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" Namespace="default" Pod="nginx-deployment-7fcdb87857-qsdft" WorkloadEndpoint="172.24.4.137-k8s-nginx--deployment--7fcdb87857--qsdft-" Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.487 [INFO][2877] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" Namespace="default" Pod="nginx-deployment-7fcdb87857-qsdft" WorkloadEndpoint="172.24.4.137-k8s-nginx--deployment--7fcdb87857--qsdft-eth0" Jan 29 
12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.558 [INFO][2903] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" HandleID="k8s-pod-network.b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" Workload="172.24.4.137-k8s-nginx--deployment--7fcdb87857--qsdft-eth0" Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.574 [INFO][2903] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" HandleID="k8s-pod-network.b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" Workload="172.24.4.137-k8s-nginx--deployment--7fcdb87857--qsdft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ac9d0), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.137", "pod":"nginx-deployment-7fcdb87857-qsdft", "timestamp":"2025-01-29 12:14:29.558838227 +0000 UTC"}, Hostname:"172.24.4.137", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.574 [INFO][2903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.574 [INFO][2903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.575 [INFO][2903] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.137' Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.577 [INFO][2903] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" host="172.24.4.137" Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.586 [INFO][2903] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.137" Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.593 [INFO][2903] ipam/ipam.go 489: Trying affinity for 192.168.66.64/26 host="172.24.4.137" Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.595 [INFO][2903] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.64/26 host="172.24.4.137" Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.598 [INFO][2903] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="172.24.4.137" Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.599 [INFO][2903] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" host="172.24.4.137" Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.601 [INFO][2903] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.612 [INFO][2903] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" host="172.24.4.137" Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.620 [INFO][2903] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.65/26] block=192.168.66.64/26 
handle="k8s-pod-network.b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" host="172.24.4.137" Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.620 [INFO][2903] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.65/26] handle="k8s-pod-network.b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" host="172.24.4.137" Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.620 [INFO][2903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:14:29.644964 containerd[1462]: 2025-01-29 12:14:29.620 [INFO][2903] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.65/26] IPv6=[] ContainerID="b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" HandleID="k8s-pod-network.b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" Workload="172.24.4.137-k8s-nginx--deployment--7fcdb87857--qsdft-eth0" Jan 29 12:14:29.645549 containerd[1462]: 2025-01-29 12:14:29.622 [INFO][2877] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" Namespace="default" Pod="nginx-deployment-7fcdb87857-qsdft" WorkloadEndpoint="172.24.4.137-k8s-nginx--deployment--7fcdb87857--qsdft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.137-k8s-nginx--deployment--7fcdb87857--qsdft-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"e20c74cd-bd32-47f3-b39b-858ca4b1cb78", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.137", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-qsdft", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali2583a72767d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:14:29.645549 containerd[1462]: 2025-01-29 12:14:29.622 [INFO][2877] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.65/32] ContainerID="b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" Namespace="default" Pod="nginx-deployment-7fcdb87857-qsdft" WorkloadEndpoint="172.24.4.137-k8s-nginx--deployment--7fcdb87857--qsdft-eth0" Jan 29 12:14:29.645549 containerd[1462]: 2025-01-29 12:14:29.622 [INFO][2877] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2583a72767d ContainerID="b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" Namespace="default" Pod="nginx-deployment-7fcdb87857-qsdft" WorkloadEndpoint="172.24.4.137-k8s-nginx--deployment--7fcdb87857--qsdft-eth0" Jan 29 12:14:29.645549 containerd[1462]: 2025-01-29 12:14:29.632 [INFO][2877] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" Namespace="default" Pod="nginx-deployment-7fcdb87857-qsdft" WorkloadEndpoint="172.24.4.137-k8s-nginx--deployment--7fcdb87857--qsdft-eth0" Jan 29 12:14:29.645549 containerd[1462]: 2025-01-29 12:14:29.633 [INFO][2877] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" Namespace="default" Pod="nginx-deployment-7fcdb87857-qsdft" 
WorkloadEndpoint="172.24.4.137-k8s-nginx--deployment--7fcdb87857--qsdft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.137-k8s-nginx--deployment--7fcdb87857--qsdft-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"e20c74cd-bd32-47f3-b39b-858ca4b1cb78", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.137", ContainerID:"b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a", Pod:"nginx-deployment-7fcdb87857-qsdft", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali2583a72767d", MAC:"4a:84:bd:c7:d8:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:14:29.645549 containerd[1462]: 2025-01-29 12:14:29.642 [INFO][2877] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a" Namespace="default" Pod="nginx-deployment-7fcdb87857-qsdft" WorkloadEndpoint="172.24.4.137-k8s-nginx--deployment--7fcdb87857--qsdft-eth0" Jan 29 12:14:29.677816 containerd[1462]: time="2025-01-29T12:14:29.677569952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:14:29.677816 containerd[1462]: time="2025-01-29T12:14:29.677654381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:14:29.677816 containerd[1462]: time="2025-01-29T12:14:29.677671824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:14:29.677816 containerd[1462]: time="2025-01-29T12:14:29.677762455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:14:29.699092 systemd[1]: Started cri-containerd-b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a.scope - libcontainer container b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a. Jan 29 12:14:29.729426 systemd-networkd[1373]: cali0f2722c660e: Link UP Jan 29 12:14:29.729623 systemd-networkd[1373]: cali0f2722c660e: Gained carrier Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.460 [INFO][2880] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.494 [INFO][2880] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.137-k8s-csi--node--driver--w9pfc-eth0 csi-node-driver- calico-system 87fda5ff-7ae8-4a23-9f68-7722f3966cdc 1053 0 2025-01-29 12:14:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.24.4.137 csi-node-driver-w9pfc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0f2722c660e [] []}} 
ContainerID="54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" Namespace="calico-system" Pod="csi-node-driver-w9pfc" WorkloadEndpoint="172.24.4.137-k8s-csi--node--driver--w9pfc-" Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.494 [INFO][2880] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" Namespace="calico-system" Pod="csi-node-driver-w9pfc" WorkloadEndpoint="172.24.4.137-k8s-csi--node--driver--w9pfc-eth0" Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.578 [INFO][2907] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" HandleID="k8s-pod-network.54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" Workload="172.24.4.137-k8s-csi--node--driver--w9pfc-eth0" Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.589 [INFO][2907] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" HandleID="k8s-pod-network.54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" Workload="172.24.4.137-k8s-csi--node--driver--w9pfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000392c80), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.137", "pod":"csi-node-driver-w9pfc", "timestamp":"2025-01-29 12:14:29.578506765 +0000 UTC"}, Hostname:"172.24.4.137", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.589 [INFO][2907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.620 [INFO][2907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.621 [INFO][2907] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.137' Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.679 [INFO][2907] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" host="172.24.4.137" Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.689 [INFO][2907] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.137" Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.695 [INFO][2907] ipam/ipam.go 489: Trying affinity for 192.168.66.64/26 host="172.24.4.137" Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.698 [INFO][2907] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.64/26 host="172.24.4.137" Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.703 [INFO][2907] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="172.24.4.137" Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.703 [INFO][2907] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" host="172.24.4.137" Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.707 [INFO][2907] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059 Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.714 [INFO][2907] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" host="172.24.4.137" Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.722 [INFO][2907] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.66/26] block=192.168.66.64/26 
handle="k8s-pod-network.54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" host="172.24.4.137" Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.722 [INFO][2907] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.66/26] handle="k8s-pod-network.54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" host="172.24.4.137" Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.722 [INFO][2907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:14:29.748908 containerd[1462]: 2025-01-29 12:14:29.722 [INFO][2907] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.66/26] IPv6=[] ContainerID="54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" HandleID="k8s-pod-network.54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" Workload="172.24.4.137-k8s-csi--node--driver--w9pfc-eth0" Jan 29 12:14:29.749818 containerd[1462]: 2025-01-29 12:14:29.724 [INFO][2880] cni-plugin/k8s.go 386: Populated endpoint ContainerID="54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" Namespace="calico-system" Pod="csi-node-driver-w9pfc" WorkloadEndpoint="172.24.4.137-k8s-csi--node--driver--w9pfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.137-k8s-csi--node--driver--w9pfc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"87fda5ff-7ae8-4a23-9f68-7722f3966cdc", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 14, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.137", ContainerID:"", Pod:"csi-node-driver-w9pfc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f2722c660e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:14:29.749818 containerd[1462]: 2025-01-29 12:14:29.724 [INFO][2880] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.66/32] ContainerID="54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" Namespace="calico-system" Pod="csi-node-driver-w9pfc" WorkloadEndpoint="172.24.4.137-k8s-csi--node--driver--w9pfc-eth0" Jan 29 12:14:29.749818 containerd[1462]: 2025-01-29 12:14:29.724 [INFO][2880] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f2722c660e ContainerID="54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" Namespace="calico-system" Pod="csi-node-driver-w9pfc" WorkloadEndpoint="172.24.4.137-k8s-csi--node--driver--w9pfc-eth0" Jan 29 12:14:29.749818 containerd[1462]: 2025-01-29 12:14:29.729 [INFO][2880] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" Namespace="calico-system" Pod="csi-node-driver-w9pfc" WorkloadEndpoint="172.24.4.137-k8s-csi--node--driver--w9pfc-eth0" Jan 29 12:14:29.749818 containerd[1462]: 2025-01-29 12:14:29.729 [INFO][2880] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" Namespace="calico-system" 
Pod="csi-node-driver-w9pfc" WorkloadEndpoint="172.24.4.137-k8s-csi--node--driver--w9pfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.137-k8s-csi--node--driver--w9pfc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"87fda5ff-7ae8-4a23-9f68-7722f3966cdc", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 14, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.137", ContainerID:"54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059", Pod:"csi-node-driver-w9pfc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f2722c660e", MAC:"ce:55:3f:e9:63:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:14:29.749818 containerd[1462]: 2025-01-29 12:14:29.747 [INFO][2880] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059" Namespace="calico-system" Pod="csi-node-driver-w9pfc" WorkloadEndpoint="172.24.4.137-k8s-csi--node--driver--w9pfc-eth0" Jan 29 12:14:29.753808 
containerd[1462]: time="2025-01-29T12:14:29.753771302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qsdft,Uid:e20c74cd-bd32-47f3-b39b-858ca4b1cb78,Namespace:default,Attempt:5,} returns sandbox id \"b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a\"" Jan 29 12:14:29.755354 containerd[1462]: time="2025-01-29T12:14:29.755328592Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 12:14:29.773641 containerd[1462]: time="2025-01-29T12:14:29.773514871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:14:29.773641 containerd[1462]: time="2025-01-29T12:14:29.773591315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:14:29.773884 containerd[1462]: time="2025-01-29T12:14:29.773606304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:14:29.773884 containerd[1462]: time="2025-01-29T12:14:29.773789047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:14:29.794060 systemd[1]: Started cri-containerd-54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059.scope - libcontainer container 54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059. 
Jan 29 12:14:29.814887 containerd[1462]: time="2025-01-29T12:14:29.814855628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9pfc,Uid:87fda5ff-7ae8-4a23-9f68-7722f3966cdc,Namespace:calico-system,Attempt:10,} returns sandbox id \"54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059\"" Jan 29 12:14:29.961375 kubelet[1849]: E0129 12:14:29.960412 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:30.880998 kernel: bpftool[3167]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 12:14:30.961345 kubelet[1849]: E0129 12:14:30.961229 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:31.187384 systemd-networkd[1373]: vxlan.calico: Link UP Jan 29 12:14:31.187395 systemd-networkd[1373]: vxlan.calico: Gained carrier Jan 29 12:14:31.349372 systemd-networkd[1373]: cali2583a72767d: Gained IPv6LL Jan 29 12:14:31.603052 systemd-networkd[1373]: cali0f2722c660e: Gained IPv6LL Jan 29 12:14:31.962348 kubelet[1849]: E0129 12:14:31.961996 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:32.627255 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Jan 29 12:14:32.964550 kubelet[1849]: E0129 12:14:32.964022 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:33.964829 kubelet[1849]: E0129 12:14:33.964790 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:34.040696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491865722.mount: Deactivated successfully. 
Jan 29 12:14:34.965389 kubelet[1849]: E0129 12:14:34.965289 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:35.302013 containerd[1462]: time="2025-01-29T12:14:35.301742335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:35.304495 containerd[1462]: time="2025-01-29T12:14:35.304421751Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 12:14:35.305192 containerd[1462]: time="2025-01-29T12:14:35.305008053Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:35.309459 containerd[1462]: time="2025-01-29T12:14:35.309358149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:35.310616 containerd[1462]: time="2025-01-29T12:14:35.310467625Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.555107855s" Jan 29 12:14:35.310616 containerd[1462]: time="2025-01-29T12:14:35.310509634Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 12:14:35.314668 containerd[1462]: time="2025-01-29T12:14:35.314200682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 12:14:35.314906 containerd[1462]: time="2025-01-29T12:14:35.314817602Z" 
level=info msg="CreateContainer within sandbox \"b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 12:14:35.346104 containerd[1462]: time="2025-01-29T12:14:35.345824914Z" level=info msg="CreateContainer within sandbox \"b2fba7c84f9dbccfad84f4289eb701a59ca799caee405c9a51f276536460851a\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"06a005db39b36cd8113b7ff317e7ca4dc4f725299f11c4979955170f3ddcce8e\"" Jan 29 12:14:35.346884 containerd[1462]: time="2025-01-29T12:14:35.346707142Z" level=info msg="StartContainer for \"06a005db39b36cd8113b7ff317e7ca4dc4f725299f11c4979955170f3ddcce8e\"" Jan 29 12:14:35.395167 systemd[1]: Started cri-containerd-06a005db39b36cd8113b7ff317e7ca4dc4f725299f11c4979955170f3ddcce8e.scope - libcontainer container 06a005db39b36cd8113b7ff317e7ca4dc4f725299f11c4979955170f3ddcce8e. Jan 29 12:14:35.442581 containerd[1462]: time="2025-01-29T12:14:35.442441508Z" level=info msg="StartContainer for \"06a005db39b36cd8113b7ff317e7ca4dc4f725299f11c4979955170f3ddcce8e\" returns successfully" Jan 29 12:14:35.965693 kubelet[1849]: E0129 12:14:35.965603 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:36.966099 kubelet[1849]: E0129 12:14:36.966069 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:36.986995 containerd[1462]: time="2025-01-29T12:14:36.986952525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:36.988197 containerd[1462]: time="2025-01-29T12:14:36.988151457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 12:14:36.989315 containerd[1462]: time="2025-01-29T12:14:36.989270441Z" level=info msg="ImageCreate event 
name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:36.991714 containerd[1462]: time="2025-01-29T12:14:36.991685048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:36.992523 containerd[1462]: time="2025-01-29T12:14:36.992419990Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.678186688s" Jan 29 12:14:36.992523 containerd[1462]: time="2025-01-29T12:14:36.992448844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 12:14:36.994476 containerd[1462]: time="2025-01-29T12:14:36.994336742Z" level=info msg="CreateContainer within sandbox \"54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 12:14:37.018672 containerd[1462]: time="2025-01-29T12:14:37.018636737Z" level=info msg="CreateContainer within sandbox \"54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f7026e4e57760bf62aeefc3a8aaf3e5775df43f314af3e59542f1f0c0a4f1a9a\"" Jan 29 12:14:37.021891 containerd[1462]: time="2025-01-29T12:14:37.021590777Z" level=info msg="StartContainer for \"f7026e4e57760bf62aeefc3a8aaf3e5775df43f314af3e59542f1f0c0a4f1a9a\"" Jan 29 12:14:37.052090 systemd[1]: Started 
cri-containerd-f7026e4e57760bf62aeefc3a8aaf3e5775df43f314af3e59542f1f0c0a4f1a9a.scope - libcontainer container f7026e4e57760bf62aeefc3a8aaf3e5775df43f314af3e59542f1f0c0a4f1a9a. Jan 29 12:14:37.086908 containerd[1462]: time="2025-01-29T12:14:37.086854166Z" level=info msg="StartContainer for \"f7026e4e57760bf62aeefc3a8aaf3e5775df43f314af3e59542f1f0c0a4f1a9a\" returns successfully" Jan 29 12:14:37.088103 containerd[1462]: time="2025-01-29T12:14:37.088069120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 12:14:37.967151 kubelet[1849]: E0129 12:14:37.967067 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:38.968290 kubelet[1849]: E0129 12:14:38.968176 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:39.070946 containerd[1462]: time="2025-01-29T12:14:39.070025689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:39.073007 containerd[1462]: time="2025-01-29T12:14:39.072962827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 12:14:39.074697 containerd[1462]: time="2025-01-29T12:14:39.073471853Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:39.077163 containerd[1462]: time="2025-01-29T12:14:39.077128694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:39.077950 containerd[1462]: time="2025-01-29T12:14:39.077898790Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.989796067s" Jan 29 12:14:39.078040 containerd[1462]: time="2025-01-29T12:14:39.078021049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 12:14:39.080131 containerd[1462]: time="2025-01-29T12:14:39.079996421Z" level=info msg="CreateContainer within sandbox \"54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:14:39.102180 containerd[1462]: time="2025-01-29T12:14:39.102067816Z" level=info msg="CreateContainer within sandbox \"54d26541635bc3e8b9d82dc8d16353f4b306fef310ea306603f8b9f3fc105059\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3b7b03e107188152a2f4d48fd99704afb2ba5fc4e2c30ad41e66b5a20f2dda96\"" Jan 29 12:14:39.102681 containerd[1462]: time="2025-01-29T12:14:39.102617839Z" level=info msg="StartContainer for \"3b7b03e107188152a2f4d48fd99704afb2ba5fc4e2c30ad41e66b5a20f2dda96\"" Jan 29 12:14:39.134101 systemd[1]: Started cri-containerd-3b7b03e107188152a2f4d48fd99704afb2ba5fc4e2c30ad41e66b5a20f2dda96.scope - libcontainer container 3b7b03e107188152a2f4d48fd99704afb2ba5fc4e2c30ad41e66b5a20f2dda96. 
Jan 29 12:14:39.167096 containerd[1462]: time="2025-01-29T12:14:39.167058705Z" level=info msg="StartContainer for \"3b7b03e107188152a2f4d48fd99704afb2ba5fc4e2c30ad41e66b5a20f2dda96\" returns successfully" Jan 29 12:14:39.534781 kubelet[1849]: I0129 12:14:39.534616 1849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-qsdft" podStartSLOduration=10.975777446 podStartE2EDuration="16.534583783s" podCreationTimestamp="2025-01-29 12:14:23 +0000 UTC" firstStartedPulling="2025-01-29 12:14:29.754828391 +0000 UTC m=+25.523870377" lastFinishedPulling="2025-01-29 12:14:35.313634718 +0000 UTC m=+31.082676714" observedRunningTime="2025-01-29 12:14:35.467104445 +0000 UTC m=+31.236146441" watchObservedRunningTime="2025-01-29 12:14:39.534583783 +0000 UTC m=+35.303625819" Jan 29 12:14:39.968568 kubelet[1849]: E0129 12:14:39.968432 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:40.069049 kubelet[1849]: I0129 12:14:40.068997 1849 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:14:40.069049 kubelet[1849]: I0129 12:14:40.069056 1849 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:14:40.968744 kubelet[1849]: E0129 12:14:40.968655 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:41.969859 kubelet[1849]: E0129 12:14:41.969558 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:42.970298 kubelet[1849]: E0129 12:14:42.970212 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 
29 12:14:43.970744 kubelet[1849]: E0129 12:14:43.970624 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:44.929551 kubelet[1849]: E0129 12:14:44.929465 1849 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:44.970990 kubelet[1849]: E0129 12:14:44.970881 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:45.050039 kubelet[1849]: I0129 12:14:45.049179 1849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w9pfc" podStartSLOduration=30.78688115 podStartE2EDuration="40.049147548s" podCreationTimestamp="2025-01-29 12:14:05 +0000 UTC" firstStartedPulling="2025-01-29 12:14:29.816362594 +0000 UTC m=+25.585404580" lastFinishedPulling="2025-01-29 12:14:39.078628981 +0000 UTC m=+34.847670978" observedRunningTime="2025-01-29 12:14:39.535322721 +0000 UTC m=+35.304364757" watchObservedRunningTime="2025-01-29 12:14:45.049147548 +0000 UTC m=+40.818189584" Jan 29 12:14:45.066874 systemd[1]: Created slice kubepods-besteffort-pod5fc353fd_214a_4449_be88_d9156f5e9f1b.slice - libcontainer container kubepods-besteffort-pod5fc353fd_214a_4449_be88_d9156f5e9f1b.slice. 
Jan 29 12:14:45.139958 kubelet[1849]: I0129 12:14:45.139817 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5fc353fd-214a-4449-be88-d9156f5e9f1b-data\") pod \"nfs-server-provisioner-0\" (UID: \"5fc353fd-214a-4449-be88-d9156f5e9f1b\") " pod="default/nfs-server-provisioner-0" Jan 29 12:14:45.139958 kubelet[1849]: I0129 12:14:45.139899 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhcqx\" (UniqueName: \"kubernetes.io/projected/5fc353fd-214a-4449-be88-d9156f5e9f1b-kube-api-access-lhcqx\") pod \"nfs-server-provisioner-0\" (UID: \"5fc353fd-214a-4449-be88-d9156f5e9f1b\") " pod="default/nfs-server-provisioner-0" Jan 29 12:14:45.374638 containerd[1462]: time="2025-01-29T12:14:45.374550368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5fc353fd-214a-4449-be88-d9156f5e9f1b,Namespace:default,Attempt:0,}" Jan 29 12:14:45.632544 systemd-networkd[1373]: cali60e51b789ff: Link UP Jan 29 12:14:45.634493 systemd-networkd[1373]: cali60e51b789ff: Gained carrier Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.493 [INFO][3428] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.137-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 5fc353fd-214a-4449-be88-d9156f5e9f1b 1277 0 2025-01-29 12:14:45 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.24.4.137 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] 
[kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.137-k8s-nfs--server--provisioner--0-" Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.493 [INFO][3428] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.137-k8s-nfs--server--provisioner--0-eth0" Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.529 [INFO][3438] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" HandleID="k8s-pod-network.7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" Workload="172.24.4.137-k8s-nfs--server--provisioner--0-eth0" Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.553 [INFO][3438] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" HandleID="k8s-pod-network.7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" Workload="172.24.4.137-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc580), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.137", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 12:14:45.529574951 +0000 UTC"}, Hostname:"172.24.4.137", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.553 [INFO][3438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.553 [INFO][3438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.553 [INFO][3438] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.137' Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.557 [INFO][3438] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" host="172.24.4.137" Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.571 [INFO][3438] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.137" Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.586 [INFO][3438] ipam/ipam.go 489: Trying affinity for 192.168.66.64/26 host="172.24.4.137" Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.591 [INFO][3438] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.64/26 host="172.24.4.137" Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.597 [INFO][3438] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="172.24.4.137" Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.597 [INFO][3438] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" host="172.24.4.137" Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.601 [INFO][3438] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024 Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.612 [INFO][3438] ipam/ipam.go 1203: Writing block in 
order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" host="172.24.4.137" Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.624 [INFO][3438] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.67/26] block=192.168.66.64/26 handle="k8s-pod-network.7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" host="172.24.4.137" Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.624 [INFO][3438] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.67/26] handle="k8s-pod-network.7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" host="172.24.4.137" Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.624 [INFO][3438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:14:45.658651 containerd[1462]: 2025-01-29 12:14:45.624 [INFO][3438] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.67/26] IPv6=[] ContainerID="7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" HandleID="k8s-pod-network.7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" Workload="172.24.4.137-k8s-nfs--server--provisioner--0-eth0" Jan 29 12:14:45.662266 containerd[1462]: 2025-01-29 12:14:45.627 [INFO][3428] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.137-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.137-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5fc353fd-214a-4449-be88-d9156f5e9f1b", ResourceVersion:"1277", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 14, 45, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.137", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.66.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:14:45.662266 containerd[1462]: 2025-01-29 12:14:45.628 [INFO][3428] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.67/32] ContainerID="7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.137-k8s-nfs--server--provisioner--0-eth0" Jan 29 12:14:45.662266 containerd[1462]: 2025-01-29 12:14:45.628 [INFO][3428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.137-k8s-nfs--server--provisioner--0-eth0" Jan 29 12:14:45.662266 containerd[1462]: 2025-01-29 12:14:45.633 [INFO][3428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.137-k8s-nfs--server--provisioner--0-eth0" Jan 29 12:14:45.662743 containerd[1462]: 2025-01-29 12:14:45.634 [INFO][3428] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.137-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.137-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5fc353fd-214a-4449-be88-d9156f5e9f1b", ResourceVersion:"1277", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 14, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.137", ContainerID:"7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.66.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"92:d3:2c:83:78:06", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:14:45.662743 containerd[1462]: 2025-01-29 12:14:45.649 [INFO][3428] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="172.24.4.137-k8s-nfs--server--provisioner--0-eth0" Jan 29 12:14:45.703399 containerd[1462]: time="2025-01-29T12:14:45.702713529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:14:45.703399 containerd[1462]: time="2025-01-29T12:14:45.702797998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:14:45.703399 containerd[1462]: time="2025-01-29T12:14:45.702820349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:14:45.703399 containerd[1462]: time="2025-01-29T12:14:45.702906541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:14:45.732100 systemd[1]: Started cri-containerd-7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024.scope - libcontainer container 7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024. 
Jan 29 12:14:45.775504 containerd[1462]: time="2025-01-29T12:14:45.775291707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5fc353fd-214a-4449-be88-d9156f5e9f1b,Namespace:default,Attempt:0,} returns sandbox id \"7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024\"" Jan 29 12:14:45.777785 containerd[1462]: time="2025-01-29T12:14:45.777670654Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 12:14:45.972138 kubelet[1849]: E0129 12:14:45.971676 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:46.972462 kubelet[1849]: E0129 12:14:46.972393 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:47.220687 systemd-networkd[1373]: cali60e51b789ff: Gained IPv6LL Jan 29 12:14:47.973341 kubelet[1849]: E0129 12:14:47.973309 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:48.974541 kubelet[1849]: E0129 12:14:48.974478 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:49.010524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585459240.mount: Deactivated successfully. 
Jan 29 12:14:49.975384 kubelet[1849]: E0129 12:14:49.975336 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:50.976372 kubelet[1849]: E0129 12:14:50.976317 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:14:51.276157 containerd[1462]: time="2025-01-29T12:14:51.275957266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:51.278282 containerd[1462]: time="2025-01-29T12:14:51.278222788Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 29 12:14:51.280547 containerd[1462]: time="2025-01-29T12:14:51.280495054Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:51.284846 containerd[1462]: time="2025-01-29T12:14:51.284772382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:51.286114 containerd[1462]: time="2025-01-29T12:14:51.285981322Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.50822658s" Jan 29 12:14:51.286114 containerd[1462]: time="2025-01-29T12:14:51.286010526Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 12:14:51.291946 containerd[1462]: time="2025-01-29T12:14:51.290723352Z" level=info msg="CreateContainer within sandbox \"7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 12:14:51.313512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount510747947.mount: Deactivated successfully. Jan 29 12:14:51.326738 containerd[1462]: time="2025-01-29T12:14:51.326697134Z" level=info msg="CreateContainer within sandbox \"7961245f828a30517a5566b11599c883b655b9432fdfdbf35caba39341725024\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ed96d0f5352505549e4f54e0d6aaf5a1b96d469196ac6bf876deb8898c17c74a\"" Jan 29 12:14:51.327500 containerd[1462]: time="2025-01-29T12:14:51.327421544Z" level=info msg="StartContainer for \"ed96d0f5352505549e4f54e0d6aaf5a1b96d469196ac6bf876deb8898c17c74a\"" Jan 29 12:14:51.353551 systemd[1]: run-containerd-runc-k8s.io-ed96d0f5352505549e4f54e0d6aaf5a1b96d469196ac6bf876deb8898c17c74a-runc.yAowKP.mount: Deactivated successfully. Jan 29 12:14:51.363072 systemd[1]: Started cri-containerd-ed96d0f5352505549e4f54e0d6aaf5a1b96d469196ac6bf876deb8898c17c74a.scope - libcontainer container ed96d0f5352505549e4f54e0d6aaf5a1b96d469196ac6bf876deb8898c17c74a. 
Jan 29 12:14:51.397157 containerd[1462]: time="2025-01-29T12:14:51.397048171Z" level=info msg="StartContainer for \"ed96d0f5352505549e4f54e0d6aaf5a1b96d469196ac6bf876deb8898c17c74a\" returns successfully"
Jan 29 12:14:51.713718 kubelet[1849]: I0129 12:14:51.712828 1849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.2025942330000001 podStartE2EDuration="6.712794088s" podCreationTimestamp="2025-01-29 12:14:45 +0000 UTC" firstStartedPulling="2025-01-29 12:14:45.77704057 +0000 UTC m=+41.546082606" lastFinishedPulling="2025-01-29 12:14:51.287240475 +0000 UTC m=+47.056282461" observedRunningTime="2025-01-29 12:14:51.712730458 +0000 UTC m=+47.481772504" watchObservedRunningTime="2025-01-29 12:14:51.712794088 +0000 UTC m=+47.481836124"
Jan 29 12:14:51.977540 kubelet[1849]: E0129 12:14:51.977306 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:52.978559 kubelet[1849]: E0129 12:14:52.978447 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:53.978885 kubelet[1849]: E0129 12:14:53.978819 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:54.980060 kubelet[1849]: E0129 12:14:54.979964 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:55.980808 kubelet[1849]: E0129 12:14:55.980721 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:56.981744 kubelet[1849]: E0129 12:14:56.981664 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:57.982399 kubelet[1849]: E0129 12:14:57.982320 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:58.982986 kubelet[1849]: E0129 12:14:58.982847 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:14:59.983192 kubelet[1849]: E0129 12:14:59.983116 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:00.983572 kubelet[1849]: E0129 12:15:00.983420 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:01.983762 kubelet[1849]: E0129 12:15:01.983683 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:02.984709 kubelet[1849]: E0129 12:15:02.984579 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:03.985662 kubelet[1849]: E0129 12:15:03.985578 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:04.930635 kubelet[1849]: E0129 12:15:04.930546 1849 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:04.981173 containerd[1462]: time="2025-01-29T12:15:04.981017750Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\""
Jan 29 12:15:04.981826 containerd[1462]: time="2025-01-29T12:15:04.981236791Z" level=info msg="TearDown network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" successfully"
Jan 29 12:15:04.981826 containerd[1462]: time="2025-01-29T12:15:04.981267889Z" level=info msg="StopPodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" returns successfully"
Jan 29 12:15:04.983085 containerd[1462]: time="2025-01-29T12:15:04.982663728Z" level=info msg="RemovePodSandbox for \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\""
Jan 29 12:15:04.983085 containerd[1462]: time="2025-01-29T12:15:04.982849326Z" level=info msg="Forcibly stopping sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\""
Jan 29 12:15:04.983954 containerd[1462]: time="2025-01-29T12:15:04.983493265Z" level=info msg="TearDown network for sandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" successfully"
Jan 29 12:15:04.986536 kubelet[1849]: E0129 12:15:04.986459 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:04.991004 containerd[1462]: time="2025-01-29T12:15:04.989494143Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:04.991004 containerd[1462]: time="2025-01-29T12:15:04.989590053Z" level=info msg="RemovePodSandbox \"307e9332c846a8c79a7cc21f699cb13b066724273345f2f35d483da4b2aa4336\" returns successfully"
Jan 29 12:15:04.991004 containerd[1462]: time="2025-01-29T12:15:04.990487306Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\""
Jan 29 12:15:04.991004 containerd[1462]: time="2025-01-29T12:15:04.990660291Z" level=info msg="TearDown network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" successfully"
Jan 29 12:15:04.991004 containerd[1462]: time="2025-01-29T12:15:04.990689806Z" level=info msg="StopPodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" returns successfully"
Jan 29 12:15:04.991721 containerd[1462]: time="2025-01-29T12:15:04.991675857Z" level=info msg="RemovePodSandbox for \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\""
Jan 29 12:15:04.992251 containerd[1462]: time="2025-01-29T12:15:04.992079784Z" level=info msg="Forcibly stopping sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\""
Jan 29 12:15:04.992907 containerd[1462]: time="2025-01-29T12:15:04.992595511Z" level=info msg="TearDown network for sandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" successfully"
Jan 29 12:15:04.998254 containerd[1462]: time="2025-01-29T12:15:04.998190749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:04.998669 containerd[1462]: time="2025-01-29T12:15:04.998267694Z" level=info msg="RemovePodSandbox \"ea0789d07799a504f91e5e112abd42af6318b2dfd2b51fc0289034890c6e7881\" returns successfully"
Jan 29 12:15:04.999308 containerd[1462]: time="2025-01-29T12:15:04.999211074Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\""
Jan 29 12:15:04.999457 containerd[1462]: time="2025-01-29T12:15:04.999387395Z" level=info msg="TearDown network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" successfully"
Jan 29 12:15:04.999457 containerd[1462]: time="2025-01-29T12:15:04.999417882Z" level=info msg="StopPodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" returns successfully"
Jan 29 12:15:05.002165 containerd[1462]: time="2025-01-29T12:15:05.000244492Z" level=info msg="RemovePodSandbox for \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\""
Jan 29 12:15:05.002165 containerd[1462]: time="2025-01-29T12:15:05.000296129Z" level=info msg="Forcibly stopping sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\""
Jan 29 12:15:05.002165 containerd[1462]: time="2025-01-29T12:15:05.000422997Z" level=info msg="TearDown network for sandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" successfully"
Jan 29 12:15:05.015675 containerd[1462]: time="2025-01-29T12:15:05.015589851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:05.016121 containerd[1462]: time="2025-01-29T12:15:05.016041229Z" level=info msg="RemovePodSandbox \"1be7031a2f96d94fb7a8e001ff14a817f6bc495cc69c66ce293c7a70f6eda6bc\" returns successfully"
Jan 29 12:15:05.018216 containerd[1462]: time="2025-01-29T12:15:05.018134756Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\""
Jan 29 12:15:05.018762 containerd[1462]: time="2025-01-29T12:15:05.018603175Z" level=info msg="TearDown network for sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" successfully"
Jan 29 12:15:05.019004 containerd[1462]: time="2025-01-29T12:15:05.018906914Z" level=info msg="StopPodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" returns successfully"
Jan 29 12:15:05.020001 containerd[1462]: time="2025-01-29T12:15:05.019918402Z" level=info msg="RemovePodSandbox for \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\""
Jan 29 12:15:05.020147 containerd[1462]: time="2025-01-29T12:15:05.020009143Z" level=info msg="Forcibly stopping sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\""
Jan 29 12:15:05.020301 containerd[1462]: time="2025-01-29T12:15:05.020215319Z" level=info msg="TearDown network for sandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" successfully"
Jan 29 12:15:05.024862 containerd[1462]: time="2025-01-29T12:15:05.024756018Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:05.025133 containerd[1462]: time="2025-01-29T12:15:05.024862107Z" level=info msg="RemovePodSandbox \"6f5a65e32d90afc2b269b652129dd73c20dd12e01ba4eb5e25bf5a4a814b8494\" returns successfully"
Jan 29 12:15:05.026445 containerd[1462]: time="2025-01-29T12:15:05.026019459Z" level=info msg="StopPodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\""
Jan 29 12:15:05.026445 containerd[1462]: time="2025-01-29T12:15:05.026197884Z" level=info msg="TearDown network for sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" successfully"
Jan 29 12:15:05.026445 containerd[1462]: time="2025-01-29T12:15:05.026225676Z" level=info msg="StopPodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" returns successfully"
Jan 29 12:15:05.027460 containerd[1462]: time="2025-01-29T12:15:05.027223036Z" level=info msg="RemovePodSandbox for \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\""
Jan 29 12:15:05.027598 containerd[1462]: time="2025-01-29T12:15:05.027437790Z" level=info msg="Forcibly stopping sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\""
Jan 29 12:15:05.027798 containerd[1462]: time="2025-01-29T12:15:05.027674855Z" level=info msg="TearDown network for sandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" successfully"
Jan 29 12:15:05.032774 containerd[1462]: time="2025-01-29T12:15:05.032692108Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:05.032774 containerd[1462]: time="2025-01-29T12:15:05.032755005Z" level=info msg="RemovePodSandbox \"25ffe3c9791e2db872806d3899a41fad417c3b0f0b147b4e6c386a26bb8c291b\" returns successfully"
Jan 29 12:15:05.033649 containerd[1462]: time="2025-01-29T12:15:05.033455890Z" level=info msg="StopPodSandbox for \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\""
Jan 29 12:15:05.033649 containerd[1462]: time="2025-01-29T12:15:05.033591725Z" level=info msg="TearDown network for sandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\" successfully"
Jan 29 12:15:05.033649 containerd[1462]: time="2025-01-29T12:15:05.033612784Z" level=info msg="StopPodSandbox for \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\" returns successfully"
Jan 29 12:15:05.034720 containerd[1462]: time="2025-01-29T12:15:05.034385474Z" level=info msg="RemovePodSandbox for \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\""
Jan 29 12:15:05.034720 containerd[1462]: time="2025-01-29T12:15:05.034455536Z" level=info msg="Forcibly stopping sandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\""
Jan 29 12:15:05.034720 containerd[1462]: time="2025-01-29T12:15:05.034585279Z" level=info msg="TearDown network for sandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\" successfully"
Jan 29 12:15:05.038843 containerd[1462]: time="2025-01-29T12:15:05.038789326Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:05.038906 containerd[1462]: time="2025-01-29T12:15:05.038862042Z" level=info msg="RemovePodSandbox \"cb6c1d799d5e31a880be76cbd83a2f89c65c3c0673a1538068c1fca9fdb929dd\" returns successfully"
Jan 29 12:15:05.039410 containerd[1462]: time="2025-01-29T12:15:05.039200458Z" level=info msg="StopPodSandbox for \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\""
Jan 29 12:15:05.039410 containerd[1462]: time="2025-01-29T12:15:05.039274377Z" level=info msg="TearDown network for sandbox \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\" successfully"
Jan 29 12:15:05.039410 containerd[1462]: time="2025-01-29T12:15:05.039286349Z" level=info msg="StopPodSandbox for \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\" returns successfully"
Jan 29 12:15:05.039655 containerd[1462]: time="2025-01-29T12:15:05.039607211Z" level=info msg="RemovePodSandbox for \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\""
Jan 29 12:15:05.039696 containerd[1462]: time="2025-01-29T12:15:05.039659108Z" level=info msg="Forcibly stopping sandbox \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\""
Jan 29 12:15:05.039852 containerd[1462]: time="2025-01-29T12:15:05.039778262Z" level=info msg="TearDown network for sandbox \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\" successfully"
Jan 29 12:15:05.043825 containerd[1462]: time="2025-01-29T12:15:05.043770000Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:05.043906 containerd[1462]: time="2025-01-29T12:15:05.043839721Z" level=info msg="RemovePodSandbox \"95073791455c447c6f6d56345911526b1ccbe0feafa64f389051fb1f03ec3103\" returns successfully"
Jan 29 12:15:05.044399 containerd[1462]: time="2025-01-29T12:15:05.044190600Z" level=info msg="StopPodSandbox for \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\""
Jan 29 12:15:05.044399 containerd[1462]: time="2025-01-29T12:15:05.044265240Z" level=info msg="TearDown network for sandbox \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\" successfully"
Jan 29 12:15:05.044399 containerd[1462]: time="2025-01-29T12:15:05.044277593Z" level=info msg="StopPodSandbox for \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\" returns successfully"
Jan 29 12:15:05.044645 containerd[1462]: time="2025-01-29T12:15:05.044598094Z" level=info msg="RemovePodSandbox for \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\""
Jan 29 12:15:05.044695 containerd[1462]: time="2025-01-29T12:15:05.044650022Z" level=info msg="Forcibly stopping sandbox \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\""
Jan 29 12:15:05.044848 containerd[1462]: time="2025-01-29T12:15:05.044776389Z" level=info msg="TearDown network for sandbox \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\" successfully"
Jan 29 12:15:05.049582 containerd[1462]: time="2025-01-29T12:15:05.049499870Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:05.049677 containerd[1462]: time="2025-01-29T12:15:05.049582205Z" level=info msg="RemovePodSandbox \"aed0c7869689ac085fcbcd1491174211e3e25856ebb9cb6b8d60e24122055480\" returns successfully"
Jan 29 12:15:05.050195 containerd[1462]: time="2025-01-29T12:15:05.050047898Z" level=info msg="StopPodSandbox for \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\""
Jan 29 12:15:05.050195 containerd[1462]: time="2025-01-29T12:15:05.050129391Z" level=info msg="TearDown network for sandbox \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\" successfully"
Jan 29 12:15:05.050195 containerd[1462]: time="2025-01-29T12:15:05.050141474Z" level=info msg="StopPodSandbox for \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\" returns successfully"
Jan 29 12:15:05.050624 containerd[1462]: time="2025-01-29T12:15:05.050586088Z" level=info msg="RemovePodSandbox for \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\""
Jan 29 12:15:05.050672 containerd[1462]: time="2025-01-29T12:15:05.050638296Z" level=info msg="Forcibly stopping sandbox \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\""
Jan 29 12:15:05.050835 containerd[1462]: time="2025-01-29T12:15:05.050759895Z" level=info msg="TearDown network for sandbox \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\" successfully"
Jan 29 12:15:05.055522 containerd[1462]: time="2025-01-29T12:15:05.055470411Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:05.055631 containerd[1462]: time="2025-01-29T12:15:05.055548067Z" level=info msg="RemovePodSandbox \"f418ba9570ce9058bf5f5fc58813b8d18e8eb149bcfd9568c2b3492b2e8d7922\" returns successfully"
Jan 29 12:15:05.056239 containerd[1462]: time="2025-01-29T12:15:05.056074024Z" level=info msg="StopPodSandbox for \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\""
Jan 29 12:15:05.056239 containerd[1462]: time="2025-01-29T12:15:05.056170986Z" level=info msg="TearDown network for sandbox \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\" successfully"
Jan 29 12:15:05.056239 containerd[1462]: time="2025-01-29T12:15:05.056184261Z" level=info msg="StopPodSandbox for \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\" returns successfully"
Jan 29 12:15:05.056975 containerd[1462]: time="2025-01-29T12:15:05.056497058Z" level=info msg="RemovePodSandbox for \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\""
Jan 29 12:15:05.056975 containerd[1462]: time="2025-01-29T12:15:05.056543936Z" level=info msg="Forcibly stopping sandbox \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\""
Jan 29 12:15:05.056975 containerd[1462]: time="2025-01-29T12:15:05.056619998Z" level=info msg="TearDown network for sandbox \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\" successfully"
Jan 29 12:15:05.059586 containerd[1462]: time="2025-01-29T12:15:05.059469805Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:05.059586 containerd[1462]: time="2025-01-29T12:15:05.059514769Z" level=info msg="RemovePodSandbox \"2415177dcacad0ec540b58172b88ce822d2a3c9e0812cf20d4b615f735f99dd9\" returns successfully"
Jan 29 12:15:05.059919 containerd[1462]: time="2025-01-29T12:15:05.059797430Z" level=info msg="StopPodSandbox for \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\""
Jan 29 12:15:05.059919 containerd[1462]: time="2025-01-29T12:15:05.059871709Z" level=info msg="TearDown network for sandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\" successfully"
Jan 29 12:15:05.059919 containerd[1462]: time="2025-01-29T12:15:05.059904400Z" level=info msg="StopPodSandbox for \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\" returns successfully"
Jan 29 12:15:05.060374 containerd[1462]: time="2025-01-29T12:15:05.060294432Z" level=info msg="RemovePodSandbox for \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\""
Jan 29 12:15:05.060374 containerd[1462]: time="2025-01-29T12:15:05.060315782Z" level=info msg="Forcibly stopping sandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\""
Jan 29 12:15:05.060558 containerd[1462]: time="2025-01-29T12:15:05.060376315Z" level=info msg="TearDown network for sandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\" successfully"
Jan 29 12:15:05.063725 containerd[1462]: time="2025-01-29T12:15:05.063656930Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:05.063725 containerd[1462]: time="2025-01-29T12:15:05.063693358Z" level=info msg="RemovePodSandbox \"1c92bee1f2619304ae5a946d86c331587ba4a778a2d980601b9605c996717942\" returns successfully"
Jan 29 12:15:05.064135 containerd[1462]: time="2025-01-29T12:15:05.063994784Z" level=info msg="StopPodSandbox for \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\""
Jan 29 12:15:05.064135 containerd[1462]: time="2025-01-29T12:15:05.064062120Z" level=info msg="TearDown network for sandbox \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\" successfully"
Jan 29 12:15:05.064135 containerd[1462]: time="2025-01-29T12:15:05.064073020Z" level=info msg="StopPodSandbox for \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\" returns successfully"
Jan 29 12:15:05.064953 containerd[1462]: time="2025-01-29T12:15:05.064574111Z" level=info msg="RemovePodSandbox for \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\""
Jan 29 12:15:05.064953 containerd[1462]: time="2025-01-29T12:15:05.064718262Z" level=info msg="Forcibly stopping sandbox \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\""
Jan 29 12:15:05.064953 containerd[1462]: time="2025-01-29T12:15:05.064856902Z" level=info msg="TearDown network for sandbox \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\" successfully"
Jan 29 12:15:05.070462 containerd[1462]: time="2025-01-29T12:15:05.070406343Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:05.070721 containerd[1462]: time="2025-01-29T12:15:05.070488617Z" level=info msg="RemovePodSandbox \"7459c419b22e7d97f1fc8640ac50561054a3ba2f56983dfe415496a5123b778c\" returns successfully"
Jan 29 12:15:05.071090 containerd[1462]: time="2025-01-29T12:15:05.070916641Z" level=info msg="StopPodSandbox for \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\""
Jan 29 12:15:05.071432 containerd[1462]: time="2025-01-29T12:15:05.071344162Z" level=info msg="TearDown network for sandbox \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\" successfully"
Jan 29 12:15:05.071432 containerd[1462]: time="2025-01-29T12:15:05.071362938Z" level=info msg="StopPodSandbox for \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\" returns successfully"
Jan 29 12:15:05.072217 containerd[1462]: time="2025-01-29T12:15:05.072196291Z" level=info msg="RemovePodSandbox for \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\""
Jan 29 12:15:05.073155 containerd[1462]: time="2025-01-29T12:15:05.072296859Z" level=info msg="Forcibly stopping sandbox \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\""
Jan 29 12:15:05.073155 containerd[1462]: time="2025-01-29T12:15:05.072377571Z" level=info msg="TearDown network for sandbox \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\" successfully"
Jan 29 12:15:05.075896 containerd[1462]: time="2025-01-29T12:15:05.075739798Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:05.075896 containerd[1462]: time="2025-01-29T12:15:05.075796605Z" level=info msg="RemovePodSandbox \"6e91930969ba061b9231288aa8e59157a0c22402a224841386f4f33672b70055\" returns successfully"
Jan 29 12:15:05.076436 containerd[1462]: time="2025-01-29T12:15:05.076377766Z" level=info msg="StopPodSandbox for \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\""
Jan 29 12:15:05.076630 containerd[1462]: time="2025-01-29T12:15:05.076543887Z" level=info msg="TearDown network for sandbox \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\" successfully"
Jan 29 12:15:05.076692 containerd[1462]: time="2025-01-29T12:15:05.076636621Z" level=info msg="StopPodSandbox for \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\" returns successfully"
Jan 29 12:15:05.077039 containerd[1462]: time="2025-01-29T12:15:05.077022485Z" level=info msg="RemovePodSandbox for \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\""
Jan 29 12:15:05.077304 containerd[1462]: time="2025-01-29T12:15:05.077135727Z" level=info msg="Forcibly stopping sandbox \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\""
Jan 29 12:15:05.077304 containerd[1462]: time="2025-01-29T12:15:05.077237198Z" level=info msg="TearDown network for sandbox \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\" successfully"
Jan 29 12:15:05.080646 containerd[1462]: time="2025-01-29T12:15:05.080590058Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:05.080646 containerd[1462]: time="2025-01-29T12:15:05.080626216Z" level=info msg="RemovePodSandbox \"a6b2c0bc777e88397e8dfcde754e4247e2840a2c817f2e814913448928a9712b\" returns successfully"
Jan 29 12:15:05.081521 containerd[1462]: time="2025-01-29T12:15:05.080957588Z" level=info msg="StopPodSandbox for \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\""
Jan 29 12:15:05.081521 containerd[1462]: time="2025-01-29T12:15:05.081027119Z" level=info msg="TearDown network for sandbox \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\" successfully"
Jan 29 12:15:05.081521 containerd[1462]: time="2025-01-29T12:15:05.081038450Z" level=info msg="StopPodSandbox for \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\" returns successfully"
Jan 29 12:15:05.081521 containerd[1462]: time="2025-01-29T12:15:05.081408554Z" level=info msg="RemovePodSandbox for \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\""
Jan 29 12:15:05.081521 containerd[1462]: time="2025-01-29T12:15:05.081451184Z" level=info msg="Forcibly stopping sandbox \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\""
Jan 29 12:15:05.081653 containerd[1462]: time="2025-01-29T12:15:05.081576108Z" level=info msg="TearDown network for sandbox \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\" successfully"
Jan 29 12:15:05.086229 containerd[1462]: time="2025-01-29T12:15:05.086174706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:15:05.086328 containerd[1462]: time="2025-01-29T12:15:05.086253624Z" level=info msg="RemovePodSandbox \"4f039686865eae7f21d09f9e835bd88b284705e259100ae4c21d3d075f6189b8\" returns successfully"
Jan 29 12:15:05.987495 kubelet[1849]: E0129 12:15:05.987419 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:06.988601 kubelet[1849]: E0129 12:15:06.988470 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:07.989698 kubelet[1849]: E0129 12:15:07.989601 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:08.990659 kubelet[1849]: E0129 12:15:08.990602 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:09.990923 kubelet[1849]: E0129 12:15:09.990794 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:10.991736 kubelet[1849]: E0129 12:15:10.991582 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:11.992010 kubelet[1849]: E0129 12:15:11.991839 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:12.992555 kubelet[1849]: E0129 12:15:12.992432 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:13.993293 kubelet[1849]: E0129 12:15:13.993166 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:14.994322 kubelet[1849]: E0129 12:15:14.994241 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:15.834101 systemd[1]: Created slice kubepods-besteffort-podf4e29477_17dd_405e_9082_53302aad88f3.slice - libcontainer container kubepods-besteffort-podf4e29477_17dd_405e_9082_53302aad88f3.slice.
Jan 29 12:15:15.857366 kubelet[1849]: I0129 12:15:15.857135 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d27s8\" (UniqueName: \"kubernetes.io/projected/f4e29477-17dd-405e-9082-53302aad88f3-kube-api-access-d27s8\") pod \"test-pod-1\" (UID: \"f4e29477-17dd-405e-9082-53302aad88f3\") " pod="default/test-pod-1"
Jan 29 12:15:15.857622 kubelet[1849]: I0129 12:15:15.857407 1849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9f1d8804-f6d2-4a83-b4a1-770bcc52245c\" (UniqueName: \"kubernetes.io/nfs/f4e29477-17dd-405e-9082-53302aad88f3-pvc-9f1d8804-f6d2-4a83-b4a1-770bcc52245c\") pod \"test-pod-1\" (UID: \"f4e29477-17dd-405e-9082-53302aad88f3\") " pod="default/test-pod-1"
Jan 29 12:15:15.995766 kubelet[1849]: E0129 12:15:15.995396 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:16.030039 kernel: FS-Cache: Loaded
Jan 29 12:15:16.130794 kernel: RPC: Registered named UNIX socket transport module.
Jan 29 12:15:16.131075 kernel: RPC: Registered udp transport module.
Jan 29 12:15:16.131160 kernel: RPC: Registered tcp transport module.
Jan 29 12:15:16.131381 kernel: RPC: Registered tcp-with-tls transport module.
Jan 29 12:15:16.131517 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 29 12:15:16.561365 kernel: NFS: Registering the id_resolver key type
Jan 29 12:15:16.561486 kernel: Key type id_resolver registered
Jan 29 12:15:16.562886 kernel: Key type id_legacy registered
Jan 29 12:15:16.606724 nfsidmap[3663]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Jan 29 12:15:16.619500 nfsidmap[3664]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Jan 29 12:15:16.743631 containerd[1462]: time="2025-01-29T12:15:16.743545653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f4e29477-17dd-405e-9082-53302aad88f3,Namespace:default,Attempt:0,}"
Jan 29 12:15:16.961603 systemd-networkd[1373]: cali5ec59c6bf6e: Link UP
Jan 29 12:15:16.963788 systemd-networkd[1373]: cali5ec59c6bf6e: Gained carrier
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.869 [INFO][3666] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.137-k8s-test--pod--1-eth0 default f4e29477-17dd-405e-9082-53302aad88f3 1380 0 2025-01-29 12:14:47 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.137 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.137-k8s-test--pod--1-"
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.869 [INFO][3666] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.137-k8s-test--pod--1-eth0"
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.900 [INFO][3676] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" HandleID="k8s-pod-network.7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" Workload="172.24.4.137-k8s-test--pod--1-eth0"
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.919 [INFO][3676] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" HandleID="k8s-pod-network.7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" Workload="172.24.4.137-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000337240), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.137", "pod":"test-pod-1", "timestamp":"2025-01-29 12:15:16.900969974 +0000 UTC"}, Hostname:"172.24.4.137", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.919 [INFO][3676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.919 [INFO][3676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.919 [INFO][3676] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.137'
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.923 [INFO][3676] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" host="172.24.4.137"
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.928 [INFO][3676] ipam/ipam.go 372: Looking up existing affinities for host host="172.24.4.137"
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.933 [INFO][3676] ipam/ipam.go 489: Trying affinity for 192.168.66.64/26 host="172.24.4.137"
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.936 [INFO][3676] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.64/26 host="172.24.4.137"
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.938 [INFO][3676] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.64/26 host="172.24.4.137"
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.939 [INFO][3676] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.64/26 handle="k8s-pod-network.7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" host="172.24.4.137"
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.940 [INFO][3676] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.946 [INFO][3676] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.64/26 handle="k8s-pod-network.7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" host="172.24.4.137"
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.956 [INFO][3676] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.68/26] block=192.168.66.64/26 handle="k8s-pod-network.7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" host="172.24.4.137"
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.956 [INFO][3676] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.68/26] handle="k8s-pod-network.7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" host="172.24.4.137"
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.956 [INFO][3676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.956 [INFO][3676] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.68/26] IPv6=[] ContainerID="7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" HandleID="k8s-pod-network.7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" Workload="172.24.4.137-k8s-test--pod--1-eth0"
Jan 29 12:15:16.986122 containerd[1462]: 2025-01-29 12:15:16.958 [INFO][3666] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.137-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.137-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f4e29477-17dd-405e-9082-53302aad88f3", ResourceVersion:"1380", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 14, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.137", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:15:16.987434 containerd[1462]: 2025-01-29 12:15:16.959 [INFO][3666] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.68/32] ContainerID="7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.137-k8s-test--pod--1-eth0"
Jan 29 12:15:16.987434 containerd[1462]: 2025-01-29 12:15:16.959 [INFO][3666] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.137-k8s-test--pod--1-eth0"
Jan 29 12:15:16.987434 containerd[1462]: 2025-01-29 12:15:16.961 [INFO][3666] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.137-k8s-test--pod--1-eth0"
Jan 29 12:15:16.987434 containerd[1462]: 2025-01-29 12:15:16.964 [INFO][3666] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.137-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.137-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f4e29477-17dd-405e-9082-53302aad88f3", ResourceVersion:"1380", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 14, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.137", ContainerID:"7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"e6:83:df:a8:8f:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:15:16.987434 containerd[1462]: 2025-01-29 12:15:16.984 [INFO][3666] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.137-k8s-test--pod--1-eth0"
Jan 29 12:15:16.996581 kubelet[1849]: E0129 12:15:16.996530 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:17.016394 containerd[1462]: time="2025-01-29T12:15:17.016087520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:15:17.017442 containerd[1462]: time="2025-01-29T12:15:17.016236441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:15:17.019129 containerd[1462]: time="2025-01-29T12:15:17.017174911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:15:17.019129 containerd[1462]: time="2025-01-29T12:15:17.019040200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:15:17.052219 systemd[1]: Started cri-containerd-7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff.scope - libcontainer container 7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff.
Jan 29 12:15:17.103086 containerd[1462]: time="2025-01-29T12:15:17.102354357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f4e29477-17dd-405e-9082-53302aad88f3,Namespace:default,Attempt:0,} returns sandbox id \"7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff\""
Jan 29 12:15:17.104995 containerd[1462]: time="2025-01-29T12:15:17.104908345Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 12:15:17.582400 containerd[1462]: time="2025-01-29T12:15:17.582160660Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:15:17.584014 containerd[1462]: time="2025-01-29T12:15:17.583841710Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 29 12:15:17.592375 containerd[1462]: time="2025-01-29T12:15:17.592186491Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 487.031121ms"
Jan 29 12:15:17.592375 containerd[1462]: time="2025-01-29T12:15:17.592250532Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\""
Jan 29 12:15:17.595967 containerd[1462]: time="2025-01-29T12:15:17.595837076Z" level=info msg="CreateContainer within sandbox \"7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 29 12:15:17.629711 containerd[1462]: time="2025-01-29T12:15:17.629512004Z" level=info msg="CreateContainer within sandbox \"7813adf7f289aa78740e13da8faf27d1ed32813c0e9df310c90b13327147dcff\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ec38191c50f632a86da824fec5d2e1785bf2a967eb044fac149f1f69c900df05\""
Jan 29 12:15:17.630995 containerd[1462]: time="2025-01-29T12:15:17.630452197Z" level=info msg="StartContainer for \"ec38191c50f632a86da824fec5d2e1785bf2a967eb044fac149f1f69c900df05\""
Jan 29 12:15:17.695296 systemd[1]: Started cri-containerd-ec38191c50f632a86da824fec5d2e1785bf2a967eb044fac149f1f69c900df05.scope - libcontainer container ec38191c50f632a86da824fec5d2e1785bf2a967eb044fac149f1f69c900df05.
Jan 29 12:15:17.742098 containerd[1462]: time="2025-01-29T12:15:17.742012884Z" level=info msg="StartContainer for \"ec38191c50f632a86da824fec5d2e1785bf2a967eb044fac149f1f69c900df05\" returns successfully"
Jan 29 12:15:17.997870 kubelet[1849]: E0129 12:15:17.996740 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:18.579595 systemd-networkd[1373]: cali5ec59c6bf6e: Gained IPv6LL
Jan 29 12:15:18.997879 kubelet[1849]: E0129 12:15:18.997646 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:19.998665 kubelet[1849]: E0129 12:15:19.998586 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:20.999827 kubelet[1849]: E0129 12:15:20.999750 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:22.000984 kubelet[1849]: E0129 12:15:22.000834 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:23.002177 kubelet[1849]: E0129 12:15:23.002050 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:24.003354 kubelet[1849]: E0129 12:15:24.003247 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:24.929599 kubelet[1849]: E0129 12:15:24.929514 1849 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:25.004137 kubelet[1849]: E0129 12:15:25.004026 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:26.004783 kubelet[1849]: E0129 12:15:26.004691 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:27.006067 kubelet[1849]: E0129 12:15:27.005879 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:28.006866 kubelet[1849]: E0129 12:15:28.006741 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:29.007527 kubelet[1849]: E0129 12:15:29.007418 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:30.008202 kubelet[1849]: E0129 12:15:30.008115 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:31.008984 kubelet[1849]: E0129 12:15:31.008839 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:32.009574 kubelet[1849]: E0129 12:15:32.009477 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:33.010803 kubelet[1849]: E0129 12:15:33.010697 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:34.011676 kubelet[1849]: E0129 12:15:34.011561 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:35.012229 kubelet[1849]: E0129 12:15:35.012140 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:36.012711 kubelet[1849]: E0129 12:15:36.012609 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:37.013255 kubelet[1849]: E0129 12:15:37.013107 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:38.014193 kubelet[1849]: E0129 12:15:38.014053 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:39.014498 kubelet[1849]: E0129 12:15:39.014275 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:40.015003 kubelet[1849]: E0129 12:15:40.014905 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:41.016326 kubelet[1849]: E0129 12:15:41.016159 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:42.016537 kubelet[1849]: E0129 12:15:42.016410 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:43.016759 kubelet[1849]: E0129 12:15:43.016660 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:44.017795 kubelet[1849]: E0129 12:15:44.017683 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:44.930171 kubelet[1849]: E0129 12:15:44.930055 1849 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:45.018567 kubelet[1849]: E0129 12:15:45.018485 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:15:46.018740 kubelet[1849]: E0129 12:15:46.018646 1849 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"