Jul 7 01:24:41.057007 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025 Jul 7 01:24:41.057032 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 01:24:41.057042 kernel: BIOS-provided physical RAM map: Jul 7 01:24:41.057049 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 7 01:24:41.057056 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 7 01:24:41.057081 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 7 01:24:41.057089 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jul 7 01:24:41.057096 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jul 7 01:24:41.057120 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 7 01:24:41.057127 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 7 01:24:41.057135 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jul 7 01:24:41.057142 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 7 01:24:41.057149 kernel: NX (Execute Disable) protection: active Jul 7 01:24:41.057156 kernel: APIC: Static calls initialized Jul 7 01:24:41.057169 kernel: SMBIOS 3.0.0 present. Jul 7 01:24:41.057177 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jul 7 01:24:41.057184 kernel: Hypervisor detected: KVM Jul 7 01:24:41.057192 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 7 01:24:41.057199 kernel: kvm-clock: using sched offset of 3389081409 cycles Jul 7 01:24:41.057209 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 7 01:24:41.057217 kernel: tsc: Detected 1996.249 MHz processor Jul 7 01:24:41.057225 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 01:24:41.057233 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 01:24:41.057241 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jul 7 01:24:41.057249 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 7 01:24:41.057257 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 01:24:41.057265 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jul 7 01:24:41.057273 kernel: ACPI: Early table checksum verification disabled Jul 7 01:24:41.057282 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jul 7 01:24:41.057290 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 01:24:41.057298 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 01:24:41.057306 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 01:24:41.057313 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jul 7 01:24:41.057321 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 01:24:41.057329 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) 
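
The BIOS-e820 map above lists which physical address ranges the firmware reports as usable RAM versus reserved. As a rough cross-check, the "usable" ranges can be summed straight from those lines; they come out to roughly 4 GiB, in line with the "Memory: 3966204K/4193772K available" accounting later in this boot. A minimal Python sketch (the regular expression and the embedded sample lines are only illustrative, not part of any Flatcar tooling):

    import re

    # Three "usable" ranges copied from the e820 map above.
    E820_LINES = [
        "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
        "BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable",
        "BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable",
    ]

    RANGE_RE = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    usable = 0
    for line in E820_LINES:
        m = RANGE_RE.search(line)
        start, end, kind = int(m.group(1), 16), int(m.group(2), 16), m.group(3)
        if kind == "usable":
            usable += end - start + 1  # e820 ranges are inclusive

    print(f"usable RAM reported by firmware: {usable} bytes (~{usable / 2**30:.2f} GiB)")
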
Jul 7 01:24:41.057336 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jul 7 01:24:41.057344 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jul 7 01:24:41.057354 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jul 7 01:24:41.057361 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jul 7 01:24:41.057369 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jul 7 01:24:41.057380 kernel: No NUMA configuration found Jul 7 01:24:41.057388 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jul 7 01:24:41.057396 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jul 7 01:24:41.057406 kernel: Zone ranges: Jul 7 01:24:41.057414 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 01:24:41.057422 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 7 01:24:41.057430 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jul 7 01:24:41.057438 kernel: Movable zone start for each node Jul 7 01:24:41.057446 kernel: Early memory node ranges Jul 7 01:24:41.057454 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 7 01:24:41.057462 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jul 7 01:24:41.057472 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jul 7 01:24:41.057480 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jul 7 01:24:41.057488 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 01:24:41.057496 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 7 01:24:41.057504 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jul 7 01:24:41.057513 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 7 01:24:41.057521 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 7 01:24:41.057529 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 7 01:24:41.057537 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 7 01:24:41.057547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 7 01:24:41.057555 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 01:24:41.057563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 7 01:24:41.057571 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 7 01:24:41.057579 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 01:24:41.057587 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 7 01:24:41.057595 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 7 01:24:41.057603 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jul 7 01:24:41.057611 kernel: Booting paravirtualized kernel on KVM Jul 7 01:24:41.057622 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 01:24:41.057630 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 7 01:24:41.057638 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Jul 7 01:24:41.057646 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Jul 7 01:24:41.057654 kernel: pcpu-alloc: [0] 0 1 Jul 7 01:24:41.057662 kernel: kvm-guest: PV spinlocks disabled, no host support Jul 7 01:24:41.057671 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro 
consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 01:24:41.057680 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 01:24:41.057690 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 01:24:41.057698 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 01:24:41.057706 kernel: Fallback order for Node 0: 0 Jul 7 01:24:41.057714 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jul 7 01:24:41.057722 kernel: Policy zone: Normal Jul 7 01:24:41.057730 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 01:24:41.057738 kernel: software IO TLB: area num 2. Jul 7 01:24:41.057747 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 227308K reserved, 0K cma-reserved) Jul 7 01:24:41.057755 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 7 01:24:41.057765 kernel: ftrace: allocating 37966 entries in 149 pages Jul 7 01:24:41.057773 kernel: ftrace: allocated 149 pages with 4 groups Jul 7 01:24:41.057781 kernel: Dynamic Preempt: voluntary Jul 7 01:24:41.057789 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 01:24:41.057798 kernel: rcu: RCU event tracing is enabled. Jul 7 01:24:41.057806 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 7 01:24:41.057814 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 01:24:41.057823 kernel: Rude variant of Tasks RCU enabled. Jul 7 01:24:41.057831 kernel: Tracing variant of Tasks RCU enabled. Jul 7 01:24:41.057839 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 01:24:41.057849 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 7 01:24:41.057857 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 7 01:24:41.057865 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 01:24:41.057873 kernel: Console: colour VGA+ 80x25 Jul 7 01:24:41.057881 kernel: printk: console [tty0] enabled Jul 7 01:24:41.057889 kernel: printk: console [ttyS0] enabled Jul 7 01:24:41.057897 kernel: ACPI: Core revision 20230628 Jul 7 01:24:41.057905 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 01:24:41.057913 kernel: x2apic enabled Jul 7 01:24:41.057923 kernel: APIC: Switched APIC routing to: physical x2apic Jul 7 01:24:41.057932 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 7 01:24:41.057940 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 7 01:24:41.057948 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jul 7 01:24:41.057956 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 7 01:24:41.057964 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 7 01:24:41.057972 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 01:24:41.057980 kernel: Spectre V2 : Mitigation: Retpolines Jul 7 01:24:41.057988 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 7 01:24:41.057998 kernel: Speculative Store Bypass: Vulnerable Jul 7 01:24:41.058006 kernel: x86/fpu: x87 FPU will use FXSAVE Jul 7 01:24:41.058014 kernel: Freeing SMP alternatives memory: 32K Jul 7 01:24:41.058023 kernel: pid_max: default: 32768 minimum: 301 Jul 7 01:24:41.058039 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 7 01:24:41.058049 kernel: landlock: Up and running. Jul 7 01:24:41.058057 kernel: SELinux: Initializing. Jul 7 01:24:41.058091 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 01:24:41.058100 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 01:24:41.058109 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jul 7 01:24:41.058118 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 01:24:41.058130 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 01:24:41.058138 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 01:24:41.058147 kernel: Performance Events: AMD PMU driver. Jul 7 01:24:41.058155 kernel: ... version: 0 Jul 7 01:24:41.058164 kernel: ... bit width: 48 Jul 7 01:24:41.058174 kernel: ... generic registers: 4 Jul 7 01:24:41.058183 kernel: ... value mask: 0000ffffffffffff Jul 7 01:24:41.058192 kernel: ... max period: 00007fffffffffff Jul 7 01:24:41.058200 kernel: ... fixed-purpose events: 0 Jul 7 01:24:41.058209 kernel: ... event mask: 000000000000000f Jul 7 01:24:41.058217 kernel: signal: max sigframe size: 1440 Jul 7 01:24:41.058226 kernel: rcu: Hierarchical SRCU implementation. Jul 7 01:24:41.058234 kernel: rcu: Max phase no-delay instances is 400. Jul 7 01:24:41.058243 kernel: smp: Bringing up secondary CPUs ... Jul 7 01:24:41.058253 kernel: smpboot: x86: Booting SMP configuration: Jul 7 01:24:41.058262 kernel: .... 
node #0, CPUs: #1 Jul 7 01:24:41.058270 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 01:24:41.058279 kernel: smpboot: Max logical packages: 2 Jul 7 01:24:41.058287 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jul 7 01:24:41.058296 kernel: devtmpfs: initialized Jul 7 01:24:41.058304 kernel: x86/mm: Memory block size: 128MB Jul 7 01:24:41.058313 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 01:24:41.058321 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 7 01:24:41.058330 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 01:24:41.058340 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 01:24:41.058349 kernel: audit: initializing netlink subsys (disabled) Jul 7 01:24:41.058358 kernel: audit: type=2000 audit(1751851480.172:1): state=initialized audit_enabled=0 res=1 Jul 7 01:24:41.058366 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 01:24:41.058374 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 01:24:41.058383 kernel: cpuidle: using governor menu Jul 7 01:24:41.058391 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 01:24:41.058400 kernel: dca service started, version 1.12.1 Jul 7 01:24:41.058409 kernel: PCI: Using configuration type 1 for base access Jul 7 01:24:41.058419 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 7 01:24:41.058428 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 01:24:41.058437 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 01:24:41.058445 kernel: ACPI: Added _OSI(Module Device) Jul 7 01:24:41.058454 kernel: ACPI: Added _OSI(Processor Device) Jul 7 01:24:41.058462 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 01:24:41.058471 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 01:24:41.058479 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 7 01:24:41.058488 kernel: ACPI: Interpreter enabled Jul 7 01:24:41.058498 kernel: ACPI: PM: (supports S0 S3 S5) Jul 7 01:24:41.058507 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 01:24:41.058516 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 01:24:41.058524 kernel: PCI: Using E820 reservations for host bridge windows Jul 7 01:24:41.058533 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 7 01:24:41.058541 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 01:24:41.058677 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 7 01:24:41.058774 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 7 01:24:41.058868 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 7 01:24:41.058881 kernel: acpiphp: Slot [3] registered Jul 7 01:24:41.058889 kernel: acpiphp: Slot [4] registered Jul 7 01:24:41.058898 kernel: acpiphp: Slot [5] registered Jul 7 01:24:41.058906 kernel: acpiphp: Slot [6] registered Jul 7 01:24:41.058915 kernel: acpiphp: Slot [7] registered Jul 7 01:24:41.058923 kernel: acpiphp: Slot [8] registered Jul 7 01:24:41.058931 kernel: acpiphp: Slot [9] registered Jul 7 01:24:41.058943 kernel: acpiphp: Slot [10] registered Jul 7 01:24:41.058951 kernel: acpiphp: Slot [11] registered Jul 7 01:24:41.058959 kernel: acpiphp: Slot [12] registered Jul 7 
01:24:41.058968 kernel: acpiphp: Slot [13] registered Jul 7 01:24:41.058976 kernel: acpiphp: Slot [14] registered Jul 7 01:24:41.058984 kernel: acpiphp: Slot [15] registered Jul 7 01:24:41.058993 kernel: acpiphp: Slot [16] registered Jul 7 01:24:41.059001 kernel: acpiphp: Slot [17] registered Jul 7 01:24:41.059009 kernel: acpiphp: Slot [18] registered Jul 7 01:24:41.059019 kernel: acpiphp: Slot [19] registered Jul 7 01:24:41.059028 kernel: acpiphp: Slot [20] registered Jul 7 01:24:41.059036 kernel: acpiphp: Slot [21] registered Jul 7 01:24:41.059044 kernel: acpiphp: Slot [22] registered Jul 7 01:24:41.059053 kernel: acpiphp: Slot [23] registered Jul 7 01:24:41.059087 kernel: acpiphp: Slot [24] registered Jul 7 01:24:41.059097 kernel: acpiphp: Slot [25] registered Jul 7 01:24:41.059105 kernel: acpiphp: Slot [26] registered Jul 7 01:24:41.059114 kernel: acpiphp: Slot [27] registered Jul 7 01:24:41.059122 kernel: acpiphp: Slot [28] registered Jul 7 01:24:41.059133 kernel: acpiphp: Slot [29] registered Jul 7 01:24:41.059142 kernel: acpiphp: Slot [30] registered Jul 7 01:24:41.059150 kernel: acpiphp: Slot [31] registered Jul 7 01:24:41.059158 kernel: PCI host bridge to bus 0000:00 Jul 7 01:24:41.059258 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 01:24:41.059340 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 01:24:41.059419 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 01:24:41.059499 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 7 01:24:41.059582 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jul 7 01:24:41.059661 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 01:24:41.059764 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 7 01:24:41.059865 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 7 01:24:41.059965 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 7 01:24:41.060056 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jul 7 01:24:41.060182 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 7 01:24:41.060272 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 7 01:24:41.060363 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 7 01:24:41.060454 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 7 01:24:41.060557 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 7 01:24:41.060649 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jul 7 01:24:41.060741 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jul 7 01:24:41.060848 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jul 7 01:24:41.060944 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jul 7 01:24:41.061039 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jul 7 01:24:41.061170 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jul 7 01:24:41.061263 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jul 7 01:24:41.061353 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 01:24:41.061458 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jul 7 01:24:41.061554 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jul 7 01:24:41.061645 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jul 7 01:24:41.061737 
kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jul 7 01:24:41.061836 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jul 7 01:24:41.061942 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 7 01:24:41.062044 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jul 7 01:24:41.062176 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jul 7 01:24:41.062264 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jul 7 01:24:41.062360 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jul 7 01:24:41.062451 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jul 7 01:24:41.062542 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jul 7 01:24:41.062643 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jul 7 01:24:41.062735 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jul 7 01:24:41.062832 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jul 7 01:24:41.062924 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jul 7 01:24:41.062937 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 7 01:24:41.062946 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 7 01:24:41.062954 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 7 01:24:41.062963 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 7 01:24:41.062972 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 7 01:24:41.062981 kernel: iommu: Default domain type: Translated Jul 7 01:24:41.062989 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 01:24:41.063001 kernel: PCI: Using ACPI for IRQ routing Jul 7 01:24:41.063010 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 01:24:41.063018 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 7 01:24:41.063027 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jul 7 01:24:41.063234 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 7 01:24:41.063330 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 7 01:24:41.063421 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 7 01:24:41.063433 kernel: vgaarb: loaded Jul 7 01:24:41.063446 kernel: clocksource: Switched to clocksource kvm-clock Jul 7 01:24:41.063455 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 01:24:41.063463 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 01:24:41.063472 kernel: pnp: PnP ACPI init Jul 7 01:24:41.063564 kernel: pnp 00:03: [dma 2] Jul 7 01:24:41.063578 kernel: pnp: PnP ACPI: found 5 devices Jul 7 01:24:41.063587 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 01:24:41.063596 kernel: NET: Registered PF_INET protocol family Jul 7 01:24:41.063605 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 01:24:41.063617 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 7 01:24:41.063626 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 01:24:41.063634 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 01:24:41.063643 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 7 01:24:41.063651 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 7 01:24:41.063660 kernel: UDP hash table entries: 2048 
(order: 4, 65536 bytes, linear) Jul 7 01:24:41.063669 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 01:24:41.063677 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 01:24:41.063686 kernel: NET: Registered PF_XDP protocol family Jul 7 01:24:41.063770 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 01:24:41.063850 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 01:24:41.063931 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 01:24:41.064011 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jul 7 01:24:41.064153 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jul 7 01:24:41.064247 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 7 01:24:41.064336 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 7 01:24:41.064353 kernel: PCI: CLS 0 bytes, default 64 Jul 7 01:24:41.064362 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 7 01:24:41.064371 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jul 7 01:24:41.064379 kernel: Initialise system trusted keyrings Jul 7 01:24:41.064388 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 7 01:24:41.064397 kernel: Key type asymmetric registered Jul 7 01:24:41.064405 kernel: Asymmetric key parser 'x509' registered Jul 7 01:24:41.064414 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 7 01:24:41.064422 kernel: io scheduler mq-deadline registered Jul 7 01:24:41.064433 kernel: io scheduler kyber registered Jul 7 01:24:41.064441 kernel: io scheduler bfq registered Jul 7 01:24:41.064450 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 01:24:41.064459 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jul 7 01:24:41.064468 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 7 01:24:41.064476 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 7 01:24:41.064485 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 7 01:24:41.064494 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 01:24:41.064503 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 01:24:41.064513 kernel: random: crng init done Jul 7 01:24:41.064522 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 7 01:24:41.064530 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 7 01:24:41.064539 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 7 01:24:41.064635 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 7 01:24:41.064649 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 7 01:24:41.064727 kernel: rtc_cmos 00:04: registered as rtc0 Jul 7 01:24:41.064807 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T01:24:40 UTC (1751851480) Jul 7 01:24:41.064892 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jul 7 01:24:41.064905 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 7 01:24:41.064914 kernel: NET: Registered PF_INET6 protocol family Jul 7 01:24:41.064922 kernel: Segment Routing with IPv6 Jul 7 01:24:41.064931 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 01:24:41.064939 kernel: NET: Registered PF_PACKET protocol family Jul 7 01:24:41.064948 kernel: Key type dns_resolver registered Jul 7 01:24:41.064956 kernel: IPI shorthand broadcast: enabled Jul 7 01:24:41.064965 kernel: sched_clock: Marking stable (987020073, 
173120836)->(1186848845, -26707936) Jul 7 01:24:41.064976 kernel: registered taskstats version 1 Jul 7 01:24:41.064985 kernel: Loading compiled-in X.509 certificates Jul 7 01:24:41.064993 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 7 01:24:41.065002 kernel: Key type .fscrypt registered Jul 7 01:24:41.065010 kernel: Key type fscrypt-provisioning registered Jul 7 01:24:41.065019 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 01:24:41.065027 kernel: ima: Allocated hash algorithm: sha1 Jul 7 01:24:41.065036 kernel: ima: No architecture policies found Jul 7 01:24:41.065044 kernel: clk: Disabling unused clocks Jul 7 01:24:41.065055 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 7 01:24:41.065080 kernel: Write protecting the kernel read-only data: 36864k Jul 7 01:24:41.065089 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 7 01:24:41.065097 kernel: Run /init as init process Jul 7 01:24:41.065121 kernel: with arguments: Jul 7 01:24:41.065130 kernel: /init Jul 7 01:24:41.065138 kernel: with environment: Jul 7 01:24:41.065146 kernel: HOME=/ Jul 7 01:24:41.065155 kernel: TERM=linux Jul 7 01:24:41.065171 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 01:24:41.065183 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 01:24:41.065193 systemd[1]: Detected virtualization kvm. Jul 7 01:24:41.065203 systemd[1]: Detected architecture x86-64. Jul 7 01:24:41.065212 systemd[1]: Running in initrd. Jul 7 01:24:41.065221 systemd[1]: No hostname configured, using default hostname. Jul 7 01:24:41.065230 systemd[1]: Hostname set to . Jul 7 01:24:41.065241 systemd[1]: Initializing machine ID from VM UUID. Jul 7 01:24:41.065251 systemd[1]: Queued start job for default target initrd.target. Jul 7 01:24:41.065260 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 01:24:41.065269 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 01:24:41.065279 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 01:24:41.065289 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 01:24:41.065298 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 01:24:41.065316 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 01:24:41.065329 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 01:24:41.065339 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 01:24:41.065348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 01:24:41.065358 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 01:24:41.065369 systemd[1]: Reached target paths.target - Path Units. Jul 7 01:24:41.065379 systemd[1]: Reached target slices.target - Slice Units. 
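
The "Expecting device dev-disk-by\x2dlabel-ROOT.device ..." entries above are systemd device units whose names are the block-device paths run through systemd's path escaping: the leading "/" is dropped, remaining "/" become "-", and characters outside ASCII alphanumerics, ":", "_" and "." are written as \xNN escapes, which is why the "-" in "by-label" shows up as \x2d. A minimal Python sketch of that escaping, assuming a normalized absolute path and skipping the corner cases (leading dots, empty paths) that systemd-escape(1) handles:

    def systemd_escape_path(path: str) -> str:
        """Very small sketch of systemd's path escaping (see systemd.unit(5));
        corner cases handled by systemd-escape(1) are deliberately omitted."""
        body = path.strip("/")
        out = []
        for ch in body:
            if ch == "/":
                out.append("-")                  # path separators become dashes
            elif ch.isalnum() or ch in ":_.":
                out.append(ch)                   # kept as-is
            else:
                out.append("\\x%02x" % ord(ch))  # everything else, including '-', is escaped
        return "".join(out)

    # Reproduces the device unit names seen in the log above.
    print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    print(systemd_escape_path("/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132") + ".device")
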
Jul 7 01:24:41.065388 systemd[1]: Reached target swap.target - Swaps. Jul 7 01:24:41.065398 systemd[1]: Reached target timers.target - Timer Units. Jul 7 01:24:41.065407 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 01:24:41.065416 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 01:24:41.065426 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 01:24:41.065436 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 01:24:41.065445 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 01:24:41.065456 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 01:24:41.065466 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 01:24:41.065476 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 01:24:41.065485 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 01:24:41.065495 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 01:24:41.065504 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 01:24:41.065513 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 01:24:41.065523 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 01:24:41.065532 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 01:24:41.065561 systemd-journald[184]: Collecting audit messages is disabled. Jul 7 01:24:41.065583 systemd-journald[184]: Journal started Jul 7 01:24:41.065608 systemd-journald[184]: Runtime Journal (/run/log/journal/911b8cd2f7224ac2a3c17cdea5f32dad) is 8.0M, max 78.3M, 70.3M free. Jul 7 01:24:41.069099 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 01:24:41.086377 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 01:24:41.086954 systemd-modules-load[185]: Inserted module 'overlay' Jul 7 01:24:41.093242 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 01:24:41.096936 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 01:24:41.101413 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 01:24:41.111361 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 01:24:41.115677 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 01:24:41.165672 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 01:24:41.165697 kernel: Bridge firewalling registered Jul 7 01:24:41.129154 systemd-modules-load[185]: Inserted module 'br_netfilter' Jul 7 01:24:41.165761 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 01:24:41.172244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:24:41.173108 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 01:24:41.184305 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 01:24:41.186247 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 01:24:41.190205 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jul 7 01:24:41.191735 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 01:24:41.203354 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 01:24:41.212290 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 01:24:41.213118 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 01:24:41.215003 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 01:24:41.219205 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 01:24:41.245017 systemd-resolved[215]: Positive Trust Anchors: Jul 7 01:24:41.245735 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 01:24:41.245778 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 01:24:41.248972 systemd-resolved[215]: Defaulting to hostname 'linux'. Jul 7 01:24:41.252402 dracut-cmdline[219]: dracut-dracut-053 Jul 7 01:24:41.249953 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 01:24:41.251377 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 01:24:41.254436 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 01:24:41.333161 kernel: SCSI subsystem initialized Jul 7 01:24:41.344133 kernel: Loading iSCSI transport class v2.0-870. Jul 7 01:24:41.356153 kernel: iscsi: registered transport (tcp) Jul 7 01:24:41.378667 kernel: iscsi: registered transport (qla4xxx) Jul 7 01:24:41.378727 kernel: QLogic iSCSI HBA Driver Jul 7 01:24:41.437961 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 01:24:41.447348 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 01:24:41.500254 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 01:24:41.500355 kernel: device-mapper: uevent: version 1.0.3 Jul 7 01:24:41.503504 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 7 01:24:41.549146 kernel: raid6: sse2x4 gen() 13032 MB/s Jul 7 01:24:41.567118 kernel: raid6: sse2x2 gen() 14753 MB/s Jul 7 01:24:41.585421 kernel: raid6: sse2x1 gen() 9947 MB/s Jul 7 01:24:41.585497 kernel: raid6: using algorithm sse2x2 gen() 14753 MB/s Jul 7 01:24:41.604444 kernel: raid6: .... 
xor() 9431 MB/s, rmw enabled Jul 7 01:24:41.604513 kernel: raid6: using ssse3x2 recovery algorithm Jul 7 01:24:41.626313 kernel: xor: measuring software checksum speed Jul 7 01:24:41.626380 kernel: prefetch64-sse : 18522 MB/sec Jul 7 01:24:41.629667 kernel: generic_sse : 15422 MB/sec Jul 7 01:24:41.629728 kernel: xor: using function: prefetch64-sse (18522 MB/sec) Jul 7 01:24:41.806637 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 01:24:41.823489 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 01:24:41.830374 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 01:24:41.843137 systemd-udevd[401]: Using default interface naming scheme 'v255'. Jul 7 01:24:41.847561 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 01:24:41.860440 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 01:24:41.877810 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Jul 7 01:24:41.924764 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 01:24:41.934354 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 01:24:42.001942 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 01:24:42.010379 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 01:24:42.041096 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 01:24:42.043993 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 01:24:42.045988 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 01:24:42.048788 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 01:24:42.056313 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 01:24:42.079216 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 01:24:42.104092 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jul 7 01:24:42.112093 kernel: libata version 3.00 loaded. Jul 7 01:24:42.117096 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jul 7 01:24:42.121141 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 7 01:24:42.127541 kernel: scsi host0: ata_piix Jul 7 01:24:42.127704 kernel: scsi host1: ata_piix Jul 7 01:24:42.127816 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jul 7 01:24:42.129912 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jul 7 01:24:42.132221 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 01:24:42.132368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 01:24:42.134711 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 01:24:42.135259 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 01:24:42.135383 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:24:42.136718 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 01:24:42.142333 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 01:24:42.147448 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 01:24:42.147491 kernel: GPT:17805311 != 20971519 Jul 7 01:24:42.147503 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jul 7 01:24:42.149029 kernel: GPT:17805311 != 20971519 Jul 7 01:24:42.149055 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 01:24:42.149082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 01:24:42.199859 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:24:42.206228 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 01:24:42.219184 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 01:24:42.337148 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (448) Jul 7 01:24:42.351125 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469) Jul 7 01:24:42.373789 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 7 01:24:42.390902 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 7 01:24:42.396511 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 01:24:42.401092 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 7 01:24:42.401719 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 7 01:24:42.410233 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 01:24:42.427199 disk-uuid[512]: Primary Header is updated. Jul 7 01:24:42.427199 disk-uuid[512]: Secondary Entries is updated. Jul 7 01:24:42.427199 disk-uuid[512]: Secondary Header is updated. Jul 7 01:24:42.436133 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 01:24:42.442343 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 01:24:43.464201 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 01:24:43.465676 disk-uuid[513]: The operation has completed successfully. Jul 7 01:24:43.545215 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 01:24:43.546037 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 01:24:43.567197 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 01:24:43.580804 sh[526]: Success Jul 7 01:24:43.603106 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jul 7 01:24:43.687926 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 01:24:43.709266 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 01:24:43.711298 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 01:24:43.743163 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 7 01:24:43.743260 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 01:24:43.743290 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 7 01:24:43.743320 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 7 01:24:43.744956 kernel: BTRFS info (device dm-0): using free space tree Jul 7 01:24:43.759095 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 01:24:43.760186 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
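
The GPT warnings above ("GPT:17805311 != 20971519", "Alternate GPT header not at the end of the disk") are expected on a first boot from an image: the backup GPT header still sits where the last LBA of the original image was, while the attached volume is larger (20971520 512-byte sectors, the "10.7 GB/10.0 GiB" vda reported earlier), so the backup header is no longer at the end of the disk. The disk-uuid step above rewrites the primary and backup headers (per its own log lines), and the later partition rescans no longer print the warning. The arithmetic, as a small sketch that assumes the backup header was at the image's last LBA, which is where GPT places it:

    SECTOR = 512                  # logical block size reported by virtio-blk above
    disk_sectors = 20971520       # [vda] 20971520 512-byte logical blocks
    backup_hdr_lba = 17805311     # where the image's backup GPT header currently sits

    print(f"attached disk : {disk_sectors * SECTOR / 2**30:.1f} GiB")
    print(f"original image: {(backup_hdr_lba + 1) * SECTOR / 2**30:.1f} GiB")
    print(f"backup header is {disk_sectors - 1 - backup_hdr_lba} sectors short of the last LBA")
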
Jul 7 01:24:43.766211 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 01:24:43.769191 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 01:24:43.793120 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 01:24:43.804193 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 01:24:43.804222 kernel: BTRFS info (device vda6): using free space tree Jul 7 01:24:43.816107 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 01:24:43.829663 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 7 01:24:43.835200 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 01:24:43.847347 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 01:24:43.855290 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 01:24:43.906937 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 01:24:43.918634 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 01:24:43.938596 systemd-networkd[709]: lo: Link UP Jul 7 01:24:43.938604 systemd-networkd[709]: lo: Gained carrier Jul 7 01:24:43.939774 systemd-networkd[709]: Enumeration completed Jul 7 01:24:43.941221 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 01:24:43.941346 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 01:24:43.941350 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 01:24:43.942191 systemd-networkd[709]: eth0: Link UP Jul 7 01:24:43.942194 systemd-networkd[709]: eth0: Gained carrier Jul 7 01:24:43.942210 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 01:24:43.949029 systemd[1]: Reached target network.target - Network. Jul 7 01:24:43.957120 systemd-networkd[709]: eth0: DHCPv4 address 172.24.4.191/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 7 01:24:44.002192 ignition[644]: Ignition 2.19.0 Jul 7 01:24:44.002938 ignition[644]: Stage: fetch-offline Jul 7 01:24:44.002979 ignition[644]: no configs at "/usr/lib/ignition/base.d" Jul 7 01:24:44.002989 ignition[644]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:24:44.003107 ignition[644]: parsed url from cmdline: "" Jul 7 01:24:44.003111 ignition[644]: no config URL provided Jul 7 01:24:44.003117 ignition[644]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 01:24:44.005728 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 01:24:44.003125 ignition[644]: no config at "/usr/lib/ignition/user.ign" Jul 7 01:24:44.003130 ignition[644]: failed to fetch config: resource requires networking Jul 7 01:24:44.003319 ignition[644]: Ignition finished successfully Jul 7 01:24:44.019325 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 7 01:24:44.034909 ignition[719]: Ignition 2.19.0 Jul 7 01:24:44.034926 ignition[719]: Stage: fetch Jul 7 01:24:44.035161 ignition[719]: no configs at "/usr/lib/ignition/base.d" Jul 7 01:24:44.035173 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:24:44.035325 ignition[719]: parsed url from cmdline: "" Jul 7 01:24:44.035330 ignition[719]: no config URL provided Jul 7 01:24:44.035336 ignition[719]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 01:24:44.035345 ignition[719]: no config at "/usr/lib/ignition/user.ign" Jul 7 01:24:44.035563 ignition[719]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 7 01:24:44.035577 ignition[719]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jul 7 01:24:44.035602 ignition[719]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 7 01:24:44.378518 ignition[719]: GET result: OK Jul 7 01:24:44.378777 ignition[719]: parsing config with SHA512: ed1f3d895dc603f6e21ba5c15be67794c0256dd246a8e9379463ba1d12c233ea19b862715115fbc21c5e556b668f402df1cda127f93f84d9b53e82eda3b9bdad Jul 7 01:24:44.386814 unknown[719]: fetched base config from "system" Jul 7 01:24:44.386850 unknown[719]: fetched base config from "system" Jul 7 01:24:44.387528 ignition[719]: fetch: fetch complete Jul 7 01:24:44.386865 unknown[719]: fetched user config from "openstack" Jul 7 01:24:44.387549 ignition[719]: fetch: fetch passed Jul 7 01:24:44.391959 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 01:24:44.387641 ignition[719]: Ignition finished successfully Jul 7 01:24:44.401470 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 01:24:44.439392 ignition[725]: Ignition 2.19.0 Jul 7 01:24:44.439425 ignition[725]: Stage: kargs Jul 7 01:24:44.439829 ignition[725]: no configs at "/usr/lib/ignition/base.d" Jul 7 01:24:44.439855 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:24:44.444709 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 01:24:44.441849 ignition[725]: kargs: kargs passed Jul 7 01:24:44.441953 ignition[725]: Ignition finished successfully Jul 7 01:24:44.453474 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 01:24:44.489642 ignition[731]: Ignition 2.19.0 Jul 7 01:24:44.489675 ignition[731]: Stage: disks Jul 7 01:24:44.491888 ignition[731]: no configs at "/usr/lib/ignition/base.d" Jul 7 01:24:44.491931 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:24:44.498313 ignition[731]: disks: disks passed Jul 7 01:24:44.498412 ignition[731]: Ignition finished successfully Jul 7 01:24:44.501722 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 01:24:44.504013 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 01:24:44.506108 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 01:24:44.509262 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 01:24:44.512383 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 01:24:44.515005 systemd[1]: Reached target basic.target - Basic System. Jul 7 01:24:44.525328 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
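
With no config drive present (the "config drive ... not found" lines above), Ignition's fetch stage falls back to the OpenStack metadata service, pulls the user data over the link-local address, and logs the SHA512 of the payload before parsing it. The same request can be reproduced by hand; a minimal Python sketch using only the standard library and the exact URL from the log (it must be run inside the instance, since 169.254.169.254 is only reachable there):

    import hashlib
    import urllib.request

    # Same endpoint Ignition queried above.
    URL = "http://169.254.169.254/openstack/latest/user_data"

    with urllib.request.urlopen(URL, timeout=10) as resp:
        user_data = resp.read()

    # Ignition logs "parsing config with SHA512: <digest>" for the payload it fetched.
    print("bytes fetched :", len(user_data))
    print("sha512        :", hashlib.sha512(user_data).hexdigest())
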
Jul 7 01:24:44.568277 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 7 01:24:44.579590 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 01:24:44.591360 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 01:24:44.767152 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 7 01:24:44.766706 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 01:24:44.767670 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 01:24:44.775210 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 01:24:44.779156 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 01:24:44.780564 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 01:24:44.782470 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jul 7 01:24:44.783043 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 01:24:44.783090 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 01:24:44.791465 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 01:24:44.794479 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (747) Jul 7 01:24:44.794506 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 01:24:44.801141 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 01:24:44.801240 kernel: BTRFS info (device vda6): using free space tree Jul 7 01:24:44.809469 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 01:24:44.826333 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 01:24:44.819297 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 01:24:44.966384 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 01:24:44.980942 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory Jul 7 01:24:44.993553 initrd-setup-root[790]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 01:24:45.008111 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 01:24:45.106431 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 01:24:45.118226 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 01:24:45.122271 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 01:24:45.126916 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 01:24:45.130311 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 01:24:45.158521 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 01:24:45.160211 ignition[864]: INFO : Ignition 2.19.0 Jul 7 01:24:45.160211 ignition[864]: INFO : Stage: mount Jul 7 01:24:45.160211 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 01:24:45.160211 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:24:45.164295 ignition[864]: INFO : mount: mount passed Jul 7 01:24:45.164295 ignition[864]: INFO : Ignition finished successfully Jul 7 01:24:45.162670 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jul 7 01:24:45.396540 systemd-networkd[709]: eth0: Gained IPv6LL Jul 7 01:24:52.053919 coreos-metadata[749]: Jul 07 01:24:52.053 WARN failed to locate config-drive, using the metadata service API instead Jul 7 01:24:52.093637 coreos-metadata[749]: Jul 07 01:24:52.093 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 7 01:24:52.107961 coreos-metadata[749]: Jul 07 01:24:52.107 INFO Fetch successful Jul 7 01:24:52.109506 coreos-metadata[749]: Jul 07 01:24:52.108 INFO wrote hostname ci-4081-3-4-a-4422182f44.novalocal to /sysroot/etc/hostname Jul 7 01:24:52.111471 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 7 01:24:52.111679 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jul 7 01:24:52.124389 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 01:24:52.150674 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 01:24:52.169167 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (881) Jul 7 01:24:52.177753 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 01:24:52.177815 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 01:24:52.186167 kernel: BTRFS info (device vda6): using free space tree Jul 7 01:24:52.196204 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 01:24:52.200687 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 01:24:52.248139 ignition[899]: INFO : Ignition 2.19.0 Jul 7 01:24:52.248139 ignition[899]: INFO : Stage: files Jul 7 01:24:52.251011 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 01:24:52.251011 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:24:52.251011 ignition[899]: DEBUG : files: compiled without relabeling support, skipping Jul 7 01:24:52.256716 ignition[899]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 01:24:52.256716 ignition[899]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 01:24:52.260992 ignition[899]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 01:24:52.260992 ignition[899]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 01:24:52.267856 ignition[899]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 01:24:52.262785 unknown[899]: wrote ssh authorized keys file for user: core Jul 7 01:24:52.271679 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 7 01:24:52.271679 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 01:24:52.271679 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 01:24:52.271679 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 01:24:52.271679 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 01:24:52.271679 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 01:24:52.271679 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 01:24:52.271679 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 7 01:24:53.062684 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 7 01:24:54.688246 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 01:24:54.691498 ignition[899]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 01:24:54.691498 ignition[899]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 01:24:54.691498 ignition[899]: INFO : files: files passed Jul 7 01:24:54.691498 ignition[899]: INFO : Ignition finished successfully Jul 7 01:24:54.691500 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 01:24:54.700629 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 01:24:54.705204 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 01:24:54.706102 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 01:24:54.706202 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 01:24:54.724007 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 01:24:54.724007 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 01:24:54.728907 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 01:24:54.726884 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 01:24:54.729731 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 01:24:54.741492 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 01:24:54.780728 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 01:24:54.780960 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 01:24:54.783289 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 01:24:54.784928 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 01:24:54.786902 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 01:24:54.792331 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 01:24:54.809472 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 01:24:54.818388 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 01:24:54.840579 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 01:24:54.841989 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 01:24:54.844516 systemd[1]: Stopped target timers.target - Timer Units. 
Jul 7 01:24:54.847050 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 01:24:54.847213 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 01:24:54.850038 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 01:24:54.851729 systemd[1]: Stopped target basic.target - Basic System. Jul 7 01:24:54.854282 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 01:24:54.856597 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 01:24:54.858886 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 01:24:54.861491 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 01:24:54.871354 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 01:24:54.874025 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 01:24:54.876552 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 01:24:54.879154 systemd[1]: Stopped target swap.target - Swaps. Jul 7 01:24:54.881575 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 01:24:54.881685 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 01:24:54.884524 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 01:24:54.886269 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 01:24:54.888605 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 01:24:54.888681 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 01:24:54.891224 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 01:24:54.891330 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 01:24:54.894923 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 01:24:54.895025 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 01:24:54.896649 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 01:24:54.896754 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 01:24:54.906254 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 01:24:54.913262 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 01:24:54.915947 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 01:24:54.917869 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 01:24:54.921433 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 01:24:54.922986 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 01:24:54.927119 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 01:24:54.931184 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 01:24:54.938516 ignition[951]: INFO : Ignition 2.19.0 Jul 7 01:24:54.939764 ignition[951]: INFO : Stage: umount Jul 7 01:24:54.939764 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 01:24:54.939764 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:24:54.943269 ignition[951]: INFO : umount: umount passed Jul 7 01:24:54.943269 ignition[951]: INFO : Ignition finished successfully Jul 7 01:24:54.943960 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jul 7 01:24:54.944096 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 01:24:54.947301 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 01:24:54.947382 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 01:24:54.947916 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 01:24:54.947959 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 01:24:54.948533 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 01:24:54.948572 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 01:24:54.949923 systemd[1]: Stopped target network.target - Network. Jul 7 01:24:54.952164 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 01:24:54.952212 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 01:24:54.952814 systemd[1]: Stopped target paths.target - Path Units. Jul 7 01:24:54.953273 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 01:24:54.953864 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 01:24:54.954452 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 01:24:54.955502 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 01:24:54.956569 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 01:24:54.956606 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 01:24:54.957745 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 01:24:54.957780 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 01:24:54.958920 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 01:24:54.958963 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 01:24:54.959955 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 01:24:54.959996 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 01:24:54.961287 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 01:24:54.962546 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 01:24:54.964499 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 01:24:54.964980 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 01:24:54.965097 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 01:24:54.965123 systemd-networkd[709]: eth0: DHCPv6 lease lost Jul 7 01:24:54.967553 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 01:24:54.967637 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 01:24:54.970588 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 01:24:54.970639 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 01:24:54.971685 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 01:24:54.971730 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 01:24:54.977232 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 01:24:54.979639 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 01:24:54.979697 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 01:24:54.980881 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 01:24:54.982499 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jul 7 01:24:54.982586 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 01:24:54.990767 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 01:24:54.990913 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 01:24:54.992661 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 01:24:54.992723 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 01:24:54.993788 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 01:24:54.993823 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 01:24:54.994900 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 01:24:54.994944 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 01:24:54.996766 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 01:24:54.996807 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 01:24:54.997957 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 01:24:54.997999 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 01:24:55.005257 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 01:24:55.006079 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 01:24:55.006132 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 01:24:55.006648 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 01:24:55.006688 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 01:24:55.007208 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 01:24:55.007248 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 01:24:55.007781 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 01:24:55.007820 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 01:24:55.008398 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 01:24:55.008437 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 01:24:55.009670 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 01:24:55.009710 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 01:24:55.014683 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 01:24:55.014725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:24:55.016246 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 01:24:55.016338 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 01:24:55.017289 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 01:24:55.017373 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 01:24:55.018844 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 01:24:55.026199 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 01:24:55.035053 systemd[1]: Switching root. Jul 7 01:24:55.068231 systemd-journald[184]: Journal stopped Jul 7 01:24:56.927537 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). 
Jul 7 01:24:56.927598 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 01:24:56.927615 kernel: SELinux: policy capability open_perms=1 Jul 7 01:24:56.927628 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 01:24:56.927645 kernel: SELinux: policy capability always_check_network=0 Jul 7 01:24:56.927656 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 01:24:56.927668 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 01:24:56.927679 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 01:24:56.927690 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 01:24:56.927703 kernel: audit: type=1403 audit(1751851495.720:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 01:24:56.927717 systemd[1]: Successfully loaded SELinux policy in 82.326ms. Jul 7 01:24:56.927731 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.806ms. Jul 7 01:24:56.927744 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 01:24:56.927757 systemd[1]: Detected virtualization kvm. Jul 7 01:24:56.927769 systemd[1]: Detected architecture x86-64. Jul 7 01:24:56.927780 systemd[1]: Detected first boot. Jul 7 01:24:56.927794 systemd[1]: Hostname set to . Jul 7 01:24:56.927807 systemd[1]: Initializing machine ID from VM UUID. Jul 7 01:24:56.927819 zram_generator::config[994]: No configuration found. Jul 7 01:24:56.927832 systemd[1]: Populated /etc with preset unit settings. Jul 7 01:24:56.927843 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 01:24:56.927857 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 01:24:56.927869 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 01:24:56.927881 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 01:24:56.927895 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 01:24:56.927907 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 01:24:56.927919 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 01:24:56.927931 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 01:24:56.927942 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 01:24:56.927956 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 01:24:56.927968 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 01:24:56.927980 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 01:24:56.927992 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 01:24:56.928045 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 01:24:56.928059 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 01:24:56.928582 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jul 7 01:24:56.928596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 01:24:56.928608 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 01:24:56.928623 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 01:24:56.928637 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 01:24:56.928649 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 01:24:56.928661 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 01:24:56.928677 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 01:24:56.928689 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 01:24:56.928702 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 01:24:56.928714 systemd[1]: Reached target slices.target - Slice Units. Jul 7 01:24:56.928726 systemd[1]: Reached target swap.target - Swaps. Jul 7 01:24:56.928738 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 01:24:56.928750 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 01:24:56.928762 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 01:24:56.928774 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 01:24:56.928785 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 01:24:56.928800 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 01:24:56.928812 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 01:24:56.928826 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 01:24:56.928837 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 01:24:56.928849 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:24:56.928861 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 01:24:56.928873 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 01:24:56.928884 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 01:24:56.928898 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 01:24:56.928910 systemd[1]: Reached target machines.target - Containers. Jul 7 01:24:56.928924 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 01:24:56.928936 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 01:24:56.928948 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 01:24:56.928959 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 01:24:56.928971 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 01:24:56.928983 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 01:24:56.928994 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 01:24:56.929009 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 01:24:56.929021 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 7 01:24:56.929035 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 01:24:56.929047 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 01:24:56.929058 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 01:24:56.929088 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 01:24:56.929101 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 01:24:56.929112 kernel: loop: module loaded Jul 7 01:24:56.929123 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 01:24:56.929135 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 01:24:56.929146 kernel: fuse: init (API version 7.39) Jul 7 01:24:56.929160 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 01:24:56.929173 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 01:24:56.929185 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 01:24:56.929213 systemd-journald[1087]: Collecting audit messages is disabled. Jul 7 01:24:56.929236 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 01:24:56.929248 systemd-journald[1087]: Journal started Jul 7 01:24:56.929275 systemd-journald[1087]: Runtime Journal (/run/log/journal/911b8cd2f7224ac2a3c17cdea5f32dad) is 8.0M, max 78.3M, 70.3M free. Jul 7 01:24:56.933487 systemd[1]: Stopped verity-setup.service. Jul 7 01:24:56.540908 systemd[1]: Queued start job for default target multi-user.target. Jul 7 01:24:56.570344 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 01:24:56.570723 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 01:24:56.944084 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:24:56.944149 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 01:24:56.946930 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 01:24:56.947565 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 01:24:56.949221 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 01:24:56.949785 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 01:24:56.950419 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 01:24:56.951533 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 01:24:56.953081 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 01:24:56.954409 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 01:24:56.955365 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 01:24:56.956274 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 01:24:56.957017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 01:24:56.958275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 01:24:56.959050 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 01:24:56.959248 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 01:24:56.960402 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jul 7 01:24:56.961105 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 01:24:56.961827 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 01:24:56.961946 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 01:24:56.963698 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 01:24:56.965111 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 01:24:56.965877 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 01:24:56.982870 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 01:24:56.995225 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 01:24:56.999549 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 01:24:57.000204 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 01:24:57.000234 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 01:24:57.001866 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 01:24:57.012399 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 01:24:57.015134 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 01:24:57.015768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 01:24:57.037092 kernel: ACPI: bus type drm_connector registered Jul 7 01:24:57.038743 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 01:24:57.044200 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 01:24:57.044826 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 01:24:57.047880 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 01:24:57.049182 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 01:24:57.050737 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 01:24:57.054268 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 01:24:57.056285 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 01:24:57.061091 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 01:24:57.062149 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 01:24:57.063143 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 01:24:57.069323 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 01:24:57.070108 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 01:24:57.070840 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 01:24:57.076365 systemd-journald[1087]: Time spent on flushing to /var/log/journal/911b8cd2f7224ac2a3c17cdea5f32dad is 27.384ms for 929 entries. 
Jul 7 01:24:57.076365 systemd-journald[1087]: System Journal (/var/log/journal/911b8cd2f7224ac2a3c17cdea5f32dad) is 8.0M, max 584.8M, 576.8M free. Jul 7 01:24:57.190608 systemd-journald[1087]: Received client request to flush runtime journal. Jul 7 01:24:57.190664 kernel: loop0: detected capacity change from 0 to 142488 Jul 7 01:24:57.083745 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 01:24:57.105502 udevadm[1133]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 7 01:24:57.142982 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 01:24:57.143686 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 01:24:57.154325 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 01:24:57.155723 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 01:24:57.186258 systemd-tmpfiles[1127]: ACLs are not supported, ignoring. Jul 7 01:24:57.186273 systemd-tmpfiles[1127]: ACLs are not supported, ignoring. Jul 7 01:24:57.195556 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 01:24:57.198424 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 01:24:57.211279 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 01:24:57.220995 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 01:24:57.224356 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 01:24:57.228197 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 01:24:57.248250 kernel: loop1: detected capacity change from 0 to 140768 Jul 7 01:24:57.264928 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 01:24:57.273922 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 01:24:57.299318 systemd-tmpfiles[1149]: ACLs are not supported, ignoring. Jul 7 01:24:57.299338 systemd-tmpfiles[1149]: ACLs are not supported, ignoring. Jul 7 01:24:57.303092 kernel: loop2: detected capacity change from 0 to 221472 Jul 7 01:24:57.307090 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 01:24:57.358092 kernel: loop3: detected capacity change from 0 to 8 Jul 7 01:24:57.378110 kernel: loop4: detected capacity change from 0 to 142488 Jul 7 01:24:57.463096 kernel: loop5: detected capacity change from 0 to 140768 Jul 7 01:24:57.498090 kernel: loop6: detected capacity change from 0 to 221472 Jul 7 01:24:57.542084 kernel: loop7: detected capacity change from 0 to 8 Jul 7 01:24:57.544437 (sd-merge)[1155]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jul 7 01:24:57.544852 (sd-merge)[1155]: Merged extensions into '/usr'. Jul 7 01:24:57.559144 systemd[1]: Reloading requested from client PID 1126 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 01:24:57.559168 systemd[1]: Reloading... Jul 7 01:24:57.681105 zram_generator::config[1178]: No configuration found. Jul 7 01:24:57.921971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 7 01:24:58.010639 systemd[1]: Reloading finished in 450 ms. Jul 7 01:24:58.043132 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 01:24:58.043975 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 01:24:58.052225 systemd[1]: Starting ensure-sysext.service... Jul 7 01:24:58.054252 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 01:24:58.058537 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 01:24:58.074315 ldconfig[1121]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 01:24:58.080190 systemd[1]: Reloading requested from client PID 1237 ('systemctl') (unit ensure-sysext.service)... Jul 7 01:24:58.080209 systemd[1]: Reloading... Jul 7 01:24:58.096544 systemd-udevd[1239]: Using default interface naming scheme 'v255'. Jul 7 01:24:58.109548 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 01:24:58.109894 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 01:24:58.110768 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 01:24:58.111151 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jul 7 01:24:58.111233 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jul 7 01:24:58.120718 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 01:24:58.120733 systemd-tmpfiles[1238]: Skipping /boot Jul 7 01:24:58.132279 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 01:24:58.132295 systemd-tmpfiles[1238]: Skipping /boot Jul 7 01:24:58.160577 zram_generator::config[1267]: No configuration found. Jul 7 01:24:58.269152 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1277) Jul 7 01:24:58.333088 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 7 01:24:58.342381 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 7 01:24:58.369088 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 7 01:24:58.386365 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 01:24:58.397184 kernel: ACPI: button: Power Button [PWRF] Jul 7 01:24:58.427544 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 01:24:58.469125 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 7 01:24:58.471309 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 7 01:24:58.474271 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 01:24:58.474704 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 01:24:58.476727 kernel: Console: switching to colour dummy device 80x25 Jul 7 01:24:58.476764 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 7 01:24:58.476783 kernel: [drm] features: -context_init Jul 7 01:24:58.475409 systemd[1]: Reloading finished in 394 ms. 
Jul 7 01:24:58.478590 kernel: [drm] number of scanouts: 1 Jul 7 01:24:58.478633 kernel: [drm] number of cap sets: 0 Jul 7 01:24:58.481086 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jul 7 01:24:58.486861 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jul 7 01:24:58.487050 kernel: Console: switching to colour frame buffer device 160x50 Jul 7 01:24:58.492528 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 01:24:58.495081 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 7 01:24:58.497294 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 01:24:58.505473 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 01:24:58.535639 systemd[1]: Finished ensure-sysext.service. Jul 7 01:24:58.545934 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 01:24:58.553887 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:24:58.558200 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 01:24:58.564234 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 01:24:58.564468 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 01:24:58.567217 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 01:24:58.570265 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 01:24:58.572303 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 01:24:58.578210 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 01:24:58.580235 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 01:24:58.581530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 01:24:58.593272 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 01:24:58.597434 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 01:24:58.605954 lvm[1362]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 01:24:58.609240 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 01:24:58.614999 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 01:24:58.618658 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 01:24:58.623515 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 01:24:58.633239 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 01:24:58.633365 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:24:58.640223 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 01:24:58.657415 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 01:24:58.660280 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jul 7 01:24:58.668695 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 01:24:58.685030 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 01:24:58.685263 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 01:24:58.687483 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 01:24:58.687606 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 01:24:58.691247 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 01:24:58.700490 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 01:24:58.712331 lvm[1385]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 01:24:58.705795 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 01:24:58.705960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 01:24:58.713657 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 01:24:58.713811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 01:24:58.717012 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 01:24:58.721937 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 01:24:58.722040 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 01:24:58.732271 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 01:24:58.754658 augenrules[1401]: No rules Jul 7 01:24:58.756462 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 01:24:58.760182 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 01:24:58.766151 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 01:24:58.768642 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 01:24:58.801966 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 01:24:58.802805 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 01:24:58.820422 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:24:58.864583 systemd-networkd[1374]: lo: Link UP Jul 7 01:24:58.864596 systemd-networkd[1374]: lo: Gained carrier Jul 7 01:24:58.867365 systemd-networkd[1374]: Enumeration completed Jul 7 01:24:58.867473 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 01:24:58.871682 systemd-resolved[1375]: Positive Trust Anchors: Jul 7 01:24:58.872035 systemd-resolved[1375]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 01:24:58.872163 systemd-resolved[1375]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 01:24:58.875267 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 01:24:58.875282 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 01:24:58.876960 systemd-networkd[1374]: eth0: Link UP Jul 7 01:24:58.876970 systemd-networkd[1374]: eth0: Gained carrier Jul 7 01:24:58.876984 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 01:24:58.878775 systemd-resolved[1375]: Using system hostname 'ci-4081-3-4-a-4422182f44.novalocal'. Jul 7 01:24:58.882275 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 01:24:58.882991 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 01:24:58.887140 systemd[1]: Reached target network.target - Network. Jul 7 01:24:58.888650 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 01:24:58.898412 systemd-networkd[1374]: eth0: DHCPv4 address 172.24.4.191/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 7 01:24:58.909741 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 01:24:58.911317 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 01:24:58.911992 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 01:24:58.914353 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 01:24:58.914963 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 01:24:58.918296 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 01:24:58.918343 systemd[1]: Reached target paths.target - Path Units. Jul 7 01:24:58.918857 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 01:24:58.919546 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 01:24:58.922201 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 01:24:58.922817 systemd[1]: Reached target timers.target - Timer Units. Jul 7 01:24:58.927007 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 01:24:58.931126 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 01:24:58.938222 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 01:24:58.940513 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 01:24:58.941835 systemd[1]: Reached target sockets.target - Socket Units. 
Jul 7 01:24:58.944095 systemd[1]: Reached target basic.target - Basic System. Jul 7 01:24:58.945733 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 01:24:58.945766 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 01:24:58.954226 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 01:24:58.959399 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 01:24:58.967238 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 01:24:58.975196 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 01:24:58.982327 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 01:24:58.984428 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 01:24:58.991753 jq[1429]: false Jul 7 01:24:58.992280 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 01:24:58.999336 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 01:24:59.013338 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 01:24:59.022304 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 01:24:59.028464 extend-filesystems[1430]: Found loop4 Jul 7 01:24:59.028464 extend-filesystems[1430]: Found loop5 Jul 7 01:24:59.028464 extend-filesystems[1430]: Found loop6 Jul 7 01:24:59.028464 extend-filesystems[1430]: Found loop7 Jul 7 01:24:59.028464 extend-filesystems[1430]: Found vda Jul 7 01:24:59.028464 extend-filesystems[1430]: Found vda1 Jul 7 01:24:59.028464 extend-filesystems[1430]: Found vda2 Jul 7 01:24:59.028464 extend-filesystems[1430]: Found vda3 Jul 7 01:24:59.028464 extend-filesystems[1430]: Found usr Jul 7 01:24:59.028464 extend-filesystems[1430]: Found vda4 Jul 7 01:24:59.028464 extend-filesystems[1430]: Found vda6 Jul 7 01:24:59.028464 extend-filesystems[1430]: Found vda7 Jul 7 01:24:59.028464 extend-filesystems[1430]: Found vda9 Jul 7 01:24:59.028464 extend-filesystems[1430]: Checking size of /dev/vda9 Jul 7 01:24:59.744550 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1277) Jul 7 01:24:59.744586 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jul 7 01:24:59.744604 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jul 7 01:24:59.028038 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 01:24:59.754078 extend-filesystems[1430]: Resized partition /dev/vda9 Jul 7 01:24:59.600561 dbus-daemon[1426]: [system] SELinux support is enabled Jul 7 01:24:59.032262 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 01:24:59.759882 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024) Jul 7 01:24:59.759882 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 01:24:59.759882 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 01:24:59.759882 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jul 7 01:24:59.033792 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 7 01:24:59.777643 extend-filesystems[1430]: Resized filesystem in /dev/vda9 Jul 7 01:24:59.595994 systemd-timesyncd[1376]: Contacted time server 67.217.246.127:123 (0.flatcar.pool.ntp.org). Jul 7 01:24:59.792515 update_engine[1443]: I20250707 01:24:59.646152 1443 main.cc:92] Flatcar Update Engine starting Jul 7 01:24:59.792515 update_engine[1443]: I20250707 01:24:59.649650 1443 update_check_scheduler.cc:74] Next update check in 8m29s Jul 7 01:24:59.596069 systemd-timesyncd[1376]: Initial clock synchronization to Mon 2025-07-07 01:24:59.595857 UTC. Jul 7 01:24:59.795634 jq[1445]: true Jul 7 01:24:59.599834 systemd-resolved[1375]: Clock change detected. Flushing caches. Jul 7 01:24:59.613947 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 01:24:59.796610 jq[1456]: true Jul 7 01:24:59.617908 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 01:24:59.633662 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 01:24:59.634190 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 01:24:59.634460 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 01:24:59.636550 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 01:24:59.640574 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 01:24:59.640855 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 01:24:59.658124 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 01:24:59.658156 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 01:24:59.669865 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 01:24:59.669892 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 01:24:59.674640 systemd[1]: Started update-engine.service - Update Engine. Jul 7 01:24:59.693175 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 01:24:59.698013 systemd-logind[1441]: New seat seat0. Jul 7 01:24:59.730163 systemd-logind[1441]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 01:24:59.730300 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 01:24:59.734255 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 01:24:59.734447 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 01:24:59.745954 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 01:24:59.746015 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 01:24:59.889527 bash[1481]: Updated "/home/core/.ssh/authorized_keys" Jul 7 01:24:59.890126 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 01:24:59.898733 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 01:24:59.905632 systemd[1]: Starting sshkeys.service... 
Jul 7 01:24:59.934768 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 01:24:59.945303 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 01:25:00.107503 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 01:25:00.130775 containerd[1460]: time="2025-07-07T01:25:00.129524887Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 01:25:00.136555 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 01:25:00.145172 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 01:25:00.159183 containerd[1460]: time="2025-07-07T01:25:00.159131334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 01:25:00.162107 containerd[1460]: time="2025-07-07T01:25:00.162062301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:25:00.162107 containerd[1460]: time="2025-07-07T01:25:00.162098499Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 01:25:00.162183 containerd[1460]: time="2025-07-07T01:25:00.162119979Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 01:25:00.162324 containerd[1460]: time="2025-07-07T01:25:00.162289918Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 01:25:00.162324 containerd[1460]: time="2025-07-07T01:25:00.162319313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 01:25:00.163799 containerd[1460]: time="2025-07-07T01:25:00.162390767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:25:00.163799 containerd[1460]: time="2025-07-07T01:25:00.162414511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 01:25:00.163799 containerd[1460]: time="2025-07-07T01:25:00.162596362Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:25:00.163799 containerd[1460]: time="2025-07-07T01:25:00.162615488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 01:25:00.163799 containerd[1460]: time="2025-07-07T01:25:00.162633372Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:25:00.163799 containerd[1460]: time="2025-07-07T01:25:00.162646106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 01:25:00.163799 containerd[1460]: time="2025-07-07T01:25:00.162732919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 7 01:25:00.163799 containerd[1460]: time="2025-07-07T01:25:00.162985943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 01:25:00.163799 containerd[1460]: time="2025-07-07T01:25:00.163081182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:25:00.163799 containerd[1460]: time="2025-07-07T01:25:00.163097152Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 01:25:00.163799 containerd[1460]: time="2025-07-07T01:25:00.163174767Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 01:25:00.165593 containerd[1460]: time="2025-07-07T01:25:00.163222627Z" level=info msg="metadata content store policy set" policy=shared Jul 7 01:25:00.166377 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 01:25:00.166682 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 01:25:00.176140 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 01:25:00.181235 containerd[1460]: time="2025-07-07T01:25:00.181151893Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 01:25:00.181235 containerd[1460]: time="2025-07-07T01:25:00.181227886Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 01:25:00.181393 containerd[1460]: time="2025-07-07T01:25:00.181250077Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 01:25:00.181393 containerd[1460]: time="2025-07-07T01:25:00.181276186Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 01:25:00.181393 containerd[1460]: time="2025-07-07T01:25:00.181300893Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 01:25:00.181530 containerd[1460]: time="2025-07-07T01:25:00.181459771Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 01:25:00.181827 containerd[1460]: time="2025-07-07T01:25:00.181798326Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 01:25:00.181940 containerd[1460]: time="2025-07-07T01:25:00.181915355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 01:25:00.181982 containerd[1460]: time="2025-07-07T01:25:00.181942967Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 01:25:00.181982 containerd[1460]: time="2025-07-07T01:25:00.181966832Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 01:25:00.182029 containerd[1460]: time="2025-07-07T01:25:00.181983433Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 01:25:00.182029 containerd[1460]: time="2025-07-07T01:25:00.182000495Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jul 7 01:25:00.182029 containerd[1460]: time="2025-07-07T01:25:00.182014411Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 01:25:00.182093 containerd[1460]: time="2025-07-07T01:25:00.182030631Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 01:25:00.182093 containerd[1460]: time="2025-07-07T01:25:00.182048124Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 01:25:00.182093 containerd[1460]: time="2025-07-07T01:25:00.182084603Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 01:25:00.182151 containerd[1460]: time="2025-07-07T01:25:00.182101254Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 01:25:00.182151 containerd[1460]: time="2025-07-07T01:25:00.182121772Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 01:25:00.182151 containerd[1460]: time="2025-07-07T01:25:00.182146719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182213 containerd[1460]: time="2025-07-07T01:25:00.182171155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182213 containerd[1460]: time="2025-07-07T01:25:00.182186674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182213 containerd[1460]: time="2025-07-07T01:25:00.182204257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182280 containerd[1460]: time="2025-07-07T01:25:00.182218995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182280 containerd[1460]: time="2025-07-07T01:25:00.182236758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182280 containerd[1460]: time="2025-07-07T01:25:00.182253049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182280 containerd[1460]: time="2025-07-07T01:25:00.182268578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182357 containerd[1460]: time="2025-07-07T01:25:00.182282945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182357 containerd[1460]: time="2025-07-07T01:25:00.182300998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182357 containerd[1460]: time="2025-07-07T01:25:00.182318862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182357 containerd[1460]: time="2025-07-07T01:25:00.182334451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182357 containerd[1460]: time="2025-07-07T01:25:00.182349109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 7 01:25:00.182455 containerd[1460]: time="2025-07-07T01:25:00.182370469Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 01:25:00.182455 containerd[1460]: time="2025-07-07T01:25:00.182394644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182455 containerd[1460]: time="2025-07-07T01:25:00.182408270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182455 containerd[1460]: time="2025-07-07T01:25:00.182421424Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 01:25:00.182665 containerd[1460]: time="2025-07-07T01:25:00.182475356Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 01:25:00.182665 containerd[1460]: time="2025-07-07T01:25:00.182500062Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 01:25:00.182665 containerd[1460]: time="2025-07-07T01:25:00.182512826Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 01:25:00.182665 containerd[1460]: time="2025-07-07T01:25:00.182527083Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 01:25:00.182665 containerd[1460]: time="2025-07-07T01:25:00.182538574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 01:25:00.182665 containerd[1460]: time="2025-07-07T01:25:00.182553181Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 01:25:00.182665 containerd[1460]: time="2025-07-07T01:25:00.182569813Z" level=info msg="NRI interface is disabled by configuration." Jul 7 01:25:00.182665 containerd[1460]: time="2025-07-07T01:25:00.182584330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 7 01:25:00.187075 containerd[1460]: time="2025-07-07T01:25:00.186978020Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 01:25:00.187075 containerd[1460]: time="2025-07-07T01:25:00.187076265Z" level=info msg="Connect containerd service" Jul 7 01:25:00.187254 containerd[1460]: time="2025-07-07T01:25:00.187130266Z" level=info msg="using legacy CRI server" Jul 7 01:25:00.187254 containerd[1460]: time="2025-07-07T01:25:00.187139744Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 01:25:00.187303 containerd[1460]: time="2025-07-07T01:25:00.187257805Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 01:25:00.187956 containerd[1460]: time="2025-07-07T01:25:00.187913034Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 01:25:00.193271 
containerd[1460]: time="2025-07-07T01:25:00.192860022Z" level=info msg="Start subscribing containerd event" Jul 7 01:25:00.193271 containerd[1460]: time="2025-07-07T01:25:00.192952956Z" level=info msg="Start recovering state" Jul 7 01:25:00.193271 containerd[1460]: time="2025-07-07T01:25:00.193051130Z" level=info msg="Start event monitor" Jul 7 01:25:00.193271 containerd[1460]: time="2025-07-07T01:25:00.193069795Z" level=info msg="Start snapshots syncer" Jul 7 01:25:00.193271 containerd[1460]: time="2025-07-07T01:25:00.193083701Z" level=info msg="Start cni network conf syncer for default" Jul 7 01:25:00.193271 containerd[1460]: time="2025-07-07T01:25:00.193098359Z" level=info msg="Start streaming server" Jul 7 01:25:00.193034 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 01:25:00.194426 containerd[1460]: time="2025-07-07T01:25:00.194375805Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 01:25:00.194723 containerd[1460]: time="2025-07-07T01:25:00.194705784Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 01:25:00.195194 containerd[1460]: time="2025-07-07T01:25:00.194915497Z" level=info msg="containerd successfully booted in 0.066518s" Jul 7 01:25:00.199443 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 01:25:00.210117 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 01:25:00.213718 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 01:25:00.216539 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 01:25:00.342545 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 01:25:00.355493 systemd[1]: Started sshd@0-172.24.4.191:22-172.24.4.1:33480.service - OpenSSH per-connection server daemon (172.24.4.1:33480). Jul 7 01:25:01.313012 systemd-networkd[1374]: eth0: Gained IPv6LL Jul 7 01:25:01.318661 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 01:25:01.323328 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 01:25:01.337322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:25:01.358973 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 01:25:01.459890 sshd[1517]: Accepted publickey for core from 172.24.4.1 port 33480 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:25:01.489413 sshd[1517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:01.516642 systemd-logind[1441]: New session 1 of user core. Jul 7 01:25:01.520226 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 01:25:01.530738 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 01:25:01.556518 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 01:25:01.587836 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 01:25:01.606492 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 01:25:01.630791 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 01:25:02.105755 systemd[1534]: Queued start job for default target default.target. Jul 7 01:25:02.112640 systemd[1534]: Created slice app.slice - User Application Slice. Jul 7 01:25:02.112771 systemd[1534]: Reached target paths.target - Paths. 
Jul 7 01:25:02.112860 systemd[1534]: Reached target timers.target - Timers. Jul 7 01:25:02.115870 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 01:25:02.125635 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 01:25:02.125693 systemd[1534]: Reached target sockets.target - Sockets. Jul 7 01:25:02.125708 systemd[1534]: Reached target basic.target - Basic System. Jul 7 01:25:02.126028 systemd[1534]: Reached target default.target - Main User Target. Jul 7 01:25:02.126116 systemd[1534]: Startup finished in 481ms. Jul 7 01:25:02.126828 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 01:25:02.138975 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 01:25:02.637811 systemd[1]: Started sshd@1-172.24.4.191:22-172.24.4.1:33482.service - OpenSSH per-connection server daemon (172.24.4.1:33482). Jul 7 01:25:04.281104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:25:04.284193 (kubelet)[1554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:25:04.344521 sshd[1546]: Accepted publickey for core from 172.24.4.1 port 33482 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:25:04.348167 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:04.360824 systemd-logind[1441]: New session 2 of user core. Jul 7 01:25:04.367164 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 01:25:05.107028 sshd[1546]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:05.120005 systemd[1]: sshd@1-172.24.4.191:22-172.24.4.1:33482.service: Deactivated successfully. Jul 7 01:25:05.123191 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 01:25:05.126646 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Jul 7 01:25:05.136830 systemd[1]: Started sshd@2-172.24.4.191:22-172.24.4.1:57162.service - OpenSSH per-connection server daemon (172.24.4.1:57162). Jul 7 01:25:05.143614 systemd-logind[1441]: Removed session 2. Jul 7 01:25:05.265576 login[1514]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 01:25:05.272489 systemd-logind[1441]: New session 3 of user core. Jul 7 01:25:05.278721 login[1515]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 01:25:05.279929 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 01:25:05.288446 systemd-logind[1441]: New session 4 of user core. Jul 7 01:25:05.293468 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 01:25:05.495816 kubelet[1554]: E0707 01:25:05.494878 1554 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:25:05.498277 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:25:05.498512 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 01:25:05.499100 systemd[1]: kubelet.service: Consumed 2.097s CPU time. 
Jul 7 01:25:06.422160 sshd[1564]: Accepted publickey for core from 172.24.4.1 port 57162 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:25:06.424863 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:06.434603 systemd-logind[1441]: New session 5 of user core. Jul 7 01:25:06.444183 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 01:25:06.610541 coreos-metadata[1425]: Jul 07 01:25:06.610 WARN failed to locate config-drive, using the metadata service API instead Jul 7 01:25:06.657098 coreos-metadata[1425]: Jul 07 01:25:06.657 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jul 7 01:25:06.928977 coreos-metadata[1425]: Jul 07 01:25:06.928 INFO Fetch successful Jul 7 01:25:06.929221 coreos-metadata[1425]: Jul 07 01:25:06.929 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 7 01:25:06.939624 coreos-metadata[1425]: Jul 07 01:25:06.939 INFO Fetch successful Jul 7 01:25:06.939624 coreos-metadata[1425]: Jul 07 01:25:06.939 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jul 7 01:25:06.947865 coreos-metadata[1425]: Jul 07 01:25:06.947 INFO Fetch successful Jul 7 01:25:06.947865 coreos-metadata[1425]: Jul 07 01:25:06.947 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jul 7 01:25:06.957441 coreos-metadata[1425]: Jul 07 01:25:06.957 INFO Fetch successful Jul 7 01:25:06.957441 coreos-metadata[1425]: Jul 07 01:25:06.957 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jul 7 01:25:06.967277 coreos-metadata[1425]: Jul 07 01:25:06.967 INFO Fetch successful Jul 7 01:25:06.967277 coreos-metadata[1425]: Jul 07 01:25:06.967 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jul 7 01:25:06.980549 coreos-metadata[1425]: Jul 07 01:25:06.980 INFO Fetch successful Jul 7 01:25:07.022404 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 01:25:07.023929 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 01:25:07.050153 coreos-metadata[1489]: Jul 07 01:25:07.050 WARN failed to locate config-drive, using the metadata service API instead Jul 7 01:25:07.065814 sshd[1564]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:07.070985 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Jul 7 01:25:07.072485 systemd[1]: sshd@2-172.24.4.191:22-172.24.4.1:57162.service: Deactivated successfully. Jul 7 01:25:07.077716 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 01:25:07.083123 systemd-logind[1441]: Removed session 5. Jul 7 01:25:07.093069 coreos-metadata[1489]: Jul 07 01:25:07.092 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 7 01:25:07.110248 coreos-metadata[1489]: Jul 07 01:25:07.110 INFO Fetch successful Jul 7 01:25:07.110248 coreos-metadata[1489]: Jul 07 01:25:07.110 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 7 01:25:07.127075 coreos-metadata[1489]: Jul 07 01:25:07.126 INFO Fetch successful Jul 7 01:25:07.137889 unknown[1489]: wrote ssh authorized keys file for user: core Jul 7 01:25:07.170545 update-ssh-keys[1608]: Updated "/home/core/.ssh/authorized_keys" Jul 7 01:25:07.172274 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Jul 7 01:25:07.174162 systemd[1]: Finished sshkeys.service. Jul 7 01:25:07.175056 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 01:25:07.175478 systemd[1]: Startup finished in 1.207s (kernel) + 14.895s (initrd) + 10.979s (userspace) = 27.081s. Jul 7 01:25:15.754460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 01:25:15.772349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:25:16.259157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:25:16.260491 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:25:16.344299 kubelet[1620]: E0707 01:25:16.344244 1620 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:25:16.355248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:25:16.356023 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 01:25:17.088430 systemd[1]: Started sshd@3-172.24.4.191:22-172.24.4.1:36470.service - OpenSSH per-connection server daemon (172.24.4.1:36470). Jul 7 01:25:18.256544 sshd[1628]: Accepted publickey for core from 172.24.4.1 port 36470 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:25:18.262936 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:18.283174 systemd-logind[1441]: New session 6 of user core. Jul 7 01:25:18.292550 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 01:25:19.001255 sshd[1628]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:19.016132 systemd[1]: sshd@3-172.24.4.191:22-172.24.4.1:36470.service: Deactivated successfully. Jul 7 01:25:19.020264 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 01:25:19.024248 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. Jul 7 01:25:19.033423 systemd[1]: Started sshd@4-172.24.4.191:22-172.24.4.1:36476.service - OpenSSH per-connection server daemon (172.24.4.1:36476). Jul 7 01:25:19.036933 systemd-logind[1441]: Removed session 6. Jul 7 01:25:20.189327 sshd[1635]: Accepted publickey for core from 172.24.4.1 port 36476 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:25:20.193515 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:20.206322 systemd-logind[1441]: New session 7 of user core. Jul 7 01:25:20.215112 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 01:25:20.832708 sshd[1635]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:20.842501 systemd[1]: sshd@4-172.24.4.191:22-172.24.4.1:36476.service: Deactivated successfully. Jul 7 01:25:20.845521 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 01:25:20.847325 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Jul 7 01:25:20.856388 systemd[1]: Started sshd@5-172.24.4.191:22-172.24.4.1:36490.service - OpenSSH per-connection server daemon (172.24.4.1:36490). Jul 7 01:25:20.859075 systemd-logind[1441]: Removed session 7. 
Jul 7 01:25:22.040591 sshd[1642]: Accepted publickey for core from 172.24.4.1 port 36490 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:25:22.045096 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:22.063104 systemd-logind[1441]: New session 8 of user core. Jul 7 01:25:22.073423 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 01:25:22.782661 sshd[1642]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:22.797886 systemd[1]: sshd@5-172.24.4.191:22-172.24.4.1:36490.service: Deactivated successfully. Jul 7 01:25:22.803161 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 01:25:22.810205 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. Jul 7 01:25:22.819464 systemd[1]: Started sshd@6-172.24.4.191:22-172.24.4.1:36494.service - OpenSSH per-connection server daemon (172.24.4.1:36494). Jul 7 01:25:22.822811 systemd-logind[1441]: Removed session 8. Jul 7 01:25:23.965149 sshd[1649]: Accepted publickey for core from 172.24.4.1 port 36494 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:25:23.968167 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:23.980397 systemd-logind[1441]: New session 9 of user core. Jul 7 01:25:23.988094 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 01:25:24.473531 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 01:25:24.474357 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:25:24.505871 sudo[1652]: pam_unix(sudo:session): session closed for user root Jul 7 01:25:24.712142 sshd[1649]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:24.724681 systemd[1]: sshd@6-172.24.4.191:22-172.24.4.1:36494.service: Deactivated successfully. Jul 7 01:25:24.730194 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 01:25:24.734176 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Jul 7 01:25:24.742361 systemd[1]: Started sshd@7-172.24.4.191:22-172.24.4.1:56480.service - OpenSSH per-connection server daemon (172.24.4.1:56480). Jul 7 01:25:24.745867 systemd-logind[1441]: Removed session 9. Jul 7 01:25:25.880379 sshd[1657]: Accepted publickey for core from 172.24.4.1 port 56480 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:25:25.883662 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:25.893871 systemd-logind[1441]: New session 10 of user core. Jul 7 01:25:25.903077 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 01:25:26.347395 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 01:25:26.348606 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:25:26.359216 sudo[1661]: pam_unix(sudo:session): session closed for user root Jul 7 01:25:26.372701 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 01:25:26.374263 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:25:26.376465 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 01:25:26.387294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 7 01:25:26.407256 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 01:25:26.422057 auditctl[1665]: No rules Jul 7 01:25:26.422218 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 01:25:26.422788 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 01:25:26.430701 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 01:25:26.502724 augenrules[1685]: No rules Jul 7 01:25:26.504686 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 01:25:26.506055 sudo[1660]: pam_unix(sudo:session): session closed for user root Jul 7 01:25:26.664131 sshd[1657]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:26.679923 systemd[1]: sshd@7-172.24.4.191:22-172.24.4.1:56480.service: Deactivated successfully. Jul 7 01:25:26.682330 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 01:25:26.683370 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. Jul 7 01:25:26.696862 systemd[1]: Started sshd@8-172.24.4.191:22-172.24.4.1:56496.service - OpenSSH per-connection server daemon (172.24.4.1:56496). Jul 7 01:25:26.700123 systemd-logind[1441]: Removed session 10. Jul 7 01:25:26.809410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:25:26.828524 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:25:26.895292 kubelet[1699]: E0707 01:25:26.895162 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:25:26.900618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:25:26.901304 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 01:25:27.709575 sshd[1693]: Accepted publickey for core from 172.24.4.1 port 56496 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:25:27.712715 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:25:27.724521 systemd-logind[1441]: New session 11 of user core. Jul 7 01:25:27.733085 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 01:25:28.187922 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 01:25:28.188670 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:25:29.578540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:25:29.615918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:25:29.672059 systemd[1]: Reloading requested from client PID 1741 ('systemctl') (unit session-11.scope)... Jul 7 01:25:29.672179 systemd[1]: Reloading... Jul 7 01:25:29.843160 zram_generator::config[1785]: No configuration found. Jul 7 01:25:29.987717 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 01:25:30.078328 systemd[1]: Reloading finished in 405 ms. 
Jul 7 01:25:30.123040 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 01:25:30.123121 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 01:25:30.123835 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:25:30.129067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:25:30.242161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:25:30.260008 (kubelet)[1843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 01:25:30.334274 kubelet[1843]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 01:25:30.334274 kubelet[1843]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 01:25:30.334274 kubelet[1843]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 01:25:30.335179 kubelet[1843]: I0707 01:25:30.334823 1843 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 01:25:31.600803 kubelet[1843]: I0707 01:25:31.599531 1843 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 01:25:31.600803 kubelet[1843]: I0707 01:25:31.599633 1843 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 01:25:31.600803 kubelet[1843]: I0707 01:25:31.600625 1843 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 01:25:31.650015 kubelet[1843]: I0707 01:25:31.649948 1843 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 01:25:31.674025 kubelet[1843]: E0707 01:25:31.673925 1843 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 01:25:31.674025 kubelet[1843]: I0707 01:25:31.673969 1843 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 01:25:31.680386 kubelet[1843]: I0707 01:25:31.680309 1843 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 01:25:31.682371 kubelet[1843]: I0707 01:25:31.680699 1843 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 01:25:31.682371 kubelet[1843]: I0707 01:25:31.680999 1843 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 01:25:31.682371 kubelet[1843]: I0707 01:25:31.681054 1843 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.191","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 01:25:31.682371 kubelet[1843]: I0707 01:25:31.681541 1843 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 01:25:31.683554 kubelet[1843]: I0707 01:25:31.681555 1843 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 01:25:31.683554 kubelet[1843]: I0707 01:25:31.682003 1843 state_mem.go:36] "Initialized new in-memory state store" Jul 7 01:25:31.687411 kubelet[1843]: I0707 01:25:31.686266 1843 kubelet.go:408] "Attempting to sync node with API server" Jul 7 01:25:31.687411 kubelet[1843]: I0707 01:25:31.686322 1843 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 01:25:31.687411 kubelet[1843]: I0707 01:25:31.686473 1843 kubelet.go:314] "Adding apiserver pod source" Jul 7 01:25:31.687411 kubelet[1843]: I0707 01:25:31.686605 1843 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 01:25:31.693451 kubelet[1843]: E0707 01:25:31.692662 1843 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:31.693451 kubelet[1843]: E0707 01:25:31.692898 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:31.695373 kubelet[1843]: I0707 01:25:31.695322 1843 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 01:25:31.696282 kubelet[1843]: I0707 01:25:31.696252 1843 kubelet.go:837] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 01:25:31.696489 kubelet[1843]: W0707 01:25:31.696454 1843 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 01:25:31.700820 kubelet[1843]: I0707 01:25:31.699763 1843 server.go:1274] "Started kubelet" Jul 7 01:25:31.705088 kubelet[1843]: I0707 01:25:31.703997 1843 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 01:25:31.715299 kubelet[1843]: I0707 01:25:31.715098 1843 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 01:25:31.723492 kubelet[1843]: I0707 01:25:31.723454 1843 server.go:449] "Adding debug handlers to kubelet server" Jul 7 01:25:31.724841 kubelet[1843]: I0707 01:25:31.716827 1843 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 01:25:31.725255 kubelet[1843]: I0707 01:25:31.715798 1843 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 01:25:31.725255 kubelet[1843]: I0707 01:25:31.725248 1843 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 01:25:31.725428 kubelet[1843]: I0707 01:25:31.719331 1843 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 01:25:31.725524 kubelet[1843]: W0707 01:25:31.718679 1843 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.191" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 7 01:25:31.726132 kubelet[1843]: E0707 01:25:31.725958 1843 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.24.4.191\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jul 7 01:25:31.726132 kubelet[1843]: W0707 01:25:31.719035 1843 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 7 01:25:31.726132 kubelet[1843]: E0707 01:25:31.726000 1843 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jul 7 01:25:31.727798 kubelet[1843]: I0707 01:25:31.719178 1843 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 01:25:31.727798 kubelet[1843]: E0707 01:25:31.719769 1843 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.191\" not found" Jul 7 01:25:31.727798 kubelet[1843]: I0707 01:25:31.727677 1843 reconciler.go:26] "Reconciler: start to sync state" Jul 7 01:25:31.729174 kubelet[1843]: W0707 01:25:31.729146 1843 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jul 7 01:25:31.729323 kubelet[1843]: E0707 01:25:31.729304 1843 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jul 7 01:25:31.731600 kubelet[1843]: I0707 01:25:31.731560 1843 factory.go:221] Registration of the systemd container factory successfully Jul 7 01:25:31.733182 kubelet[1843]: I0707 01:25:31.732276 1843 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 01:25:31.736137 kubelet[1843]: E0707 01:25:31.736079 1843 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 01:25:31.740882 kubelet[1843]: I0707 01:25:31.740851 1843 factory.go:221] Registration of the containerd container factory successfully Jul 7 01:25:31.783736 kubelet[1843]: E0707 01:25:31.768996 1843 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.191.184fd3afb62efcf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.191,UID:172.24.4.191,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.24.4.191,},FirstTimestamp:2025-07-07 01:25:31.69968255 +0000 UTC m=+1.430731808,LastTimestamp:2025-07-07 01:25:31.69968255 +0000 UTC m=+1.430731808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.191,}" Jul 7 01:25:31.787293 kubelet[1843]: E0707 01:25:31.786344 1843 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.191\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jul 7 01:25:31.788501 kubelet[1843]: I0707 01:25:31.788471 1843 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 01:25:31.788501 kubelet[1843]: I0707 01:25:31.788496 1843 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 01:25:31.788930 kubelet[1843]: I0707 01:25:31.788557 1843 state_mem.go:36] "Initialized new in-memory state store" Jul 7 01:25:31.796262 kubelet[1843]: I0707 01:25:31.795880 1843 policy_none.go:49] "None policy: Start" Jul 7 01:25:31.797792 kubelet[1843]: I0707 01:25:31.797456 1843 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 01:25:31.797792 kubelet[1843]: I0707 01:25:31.797526 1843 state_mem.go:35] "Initializing new in-memory state store" Jul 7 01:25:31.809514 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 01:25:31.822670 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 01:25:31.828525 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 7 01:25:31.829008 kubelet[1843]: E0707 01:25:31.828791 1843 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.191\" not found" Jul 7 01:25:31.836820 kubelet[1843]: I0707 01:25:31.835808 1843 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 01:25:31.836820 kubelet[1843]: I0707 01:25:31.836095 1843 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 01:25:31.836820 kubelet[1843]: I0707 01:25:31.836146 1843 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 01:25:31.840061 kubelet[1843]: I0707 01:25:31.840042 1843 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 01:25:31.843378 kubelet[1843]: E0707 01:25:31.843333 1843 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.191\" not found" Jul 7 01:25:31.847199 kubelet[1843]: I0707 01:25:31.847051 1843 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 01:25:31.848167 kubelet[1843]: I0707 01:25:31.848138 1843 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 01:25:31.848986 kubelet[1843]: I0707 01:25:31.848401 1843 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 01:25:31.848986 kubelet[1843]: I0707 01:25:31.848504 1843 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 01:25:31.848986 kubelet[1843]: E0707 01:25:31.848656 1843 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 7 01:25:31.940798 kubelet[1843]: I0707 01:25:31.939357 1843 kubelet_node_status.go:72] "Attempting to register node" node="172.24.4.191" Jul 7 01:25:31.952738 kubelet[1843]: I0707 01:25:31.952676 1843 kubelet_node_status.go:75] "Successfully registered node" node="172.24.4.191" Jul 7 01:25:31.952970 kubelet[1843]: E0707 01:25:31.952788 1843 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.24.4.191\": node \"172.24.4.191\" not found" Jul 7 01:25:32.003711 kubelet[1843]: E0707 01:25:32.003653 1843 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.191\" not found" Jul 7 01:25:32.105015 kubelet[1843]: E0707 01:25:32.104902 1843 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.191\" not found" Jul 7 01:25:32.206090 kubelet[1843]: E0707 01:25:32.205845 1843 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.191\" not found" Jul 7 01:25:32.306233 kubelet[1843]: E0707 01:25:32.306171 1843 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.191\" not found" Jul 7 01:25:32.308259 sudo[1708]: pam_unix(sudo:session): session closed for user root Jul 7 01:25:32.406904 kubelet[1843]: E0707 01:25:32.406807 1843 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.191\" not found" Jul 7 01:25:32.507441 kubelet[1843]: E0707 01:25:32.507307 1843 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.191\" not found" Jul 7 01:25:32.546224 sshd[1693]: pam_unix(sshd:session): session closed for user core Jul 7 01:25:32.559438 systemd[1]: sshd@8-172.24.4.191:22-172.24.4.1:56496.service: Deactivated successfully. 
Jul 7 01:25:32.564963 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 01:25:32.565953 systemd[1]: session-11.scope: Consumed 1.238s CPU time, 73.0M memory peak, 0B memory swap peak. Jul 7 01:25:32.567723 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Jul 7 01:25:32.571926 systemd-logind[1441]: Removed session 11. Jul 7 01:25:32.608286 kubelet[1843]: I0707 01:25:32.608147 1843 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 7 01:25:32.609180 kubelet[1843]: E0707 01:25:32.608413 1843 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.191\" not found" Jul 7 01:25:32.609180 kubelet[1843]: W0707 01:25:32.608565 1843 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 7 01:25:32.609180 kubelet[1843]: W0707 01:25:32.608659 1843 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 7 01:25:32.693448 kubelet[1843]: E0707 01:25:32.693340 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:32.708987 kubelet[1843]: E0707 01:25:32.708915 1843 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.191\" not found" Jul 7 01:25:32.810101 kubelet[1843]: E0707 01:25:32.809884 1843 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.191\" not found" Jul 7 01:25:32.911323 kubelet[1843]: E0707 01:25:32.911103 1843 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.191\" not found" Jul 7 01:25:33.011434 kubelet[1843]: E0707 01:25:33.011351 1843 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.191\" not found" Jul 7 01:25:33.112500 kubelet[1843]: E0707 01:25:33.112269 1843 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.191\" not found" Jul 7 01:25:33.214740 kubelet[1843]: I0707 01:25:33.214681 1843 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 7 01:25:33.216277 containerd[1460]: time="2025-07-07T01:25:33.215884498Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 7 01:25:33.218608 kubelet[1843]: I0707 01:25:33.216696 1843 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 7 01:25:33.694567 kubelet[1843]: I0707 01:25:33.694389 1843 apiserver.go:52] "Watching apiserver" Jul 7 01:25:33.694567 kubelet[1843]: E0707 01:25:33.694485 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:33.722244 kubelet[1843]: E0707 01:25:33.722154 1843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjttr" podUID="2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f" Jul 7 01:25:33.730885 kubelet[1843]: I0707 01:25:33.730739 1843 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 01:25:33.741942 kubelet[1843]: I0707 01:25:33.741281 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0-cni-net-dir\") pod \"calico-node-l44z2\" (UID: \"b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0\") " pod="calico-system/calico-node-l44z2" Jul 7 01:25:33.741942 kubelet[1843]: I0707 01:25:33.741388 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr26g\" (UniqueName: \"kubernetes.io/projected/b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0-kube-api-access-zr26g\") pod \"calico-node-l44z2\" (UID: \"b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0\") " pod="calico-system/calico-node-l44z2" Jul 7 01:25:33.741942 kubelet[1843]: I0707 01:25:33.741442 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsprw\" (UniqueName: \"kubernetes.io/projected/16fd9dd4-d481-48c8-9a01-a6217cc775d6-kube-api-access-vsprw\") pod \"kube-proxy-sblqw\" (UID: \"16fd9dd4-d481-48c8-9a01-a6217cc775d6\") " pod="kube-system/kube-proxy-sblqw" Jul 7 01:25:33.741942 kubelet[1843]: I0707 01:25:33.741485 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0-lib-modules\") pod \"calico-node-l44z2\" (UID: \"b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0\") " pod="calico-system/calico-node-l44z2" Jul 7 01:25:33.741942 kubelet[1843]: I0707 01:25:33.741531 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0-node-certs\") pod \"calico-node-l44z2\" (UID: \"b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0\") " pod="calico-system/calico-node-l44z2" Jul 7 01:25:33.742921 kubelet[1843]: I0707 01:25:33.741575 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0-var-lib-calico\") pod \"calico-node-l44z2\" (UID: \"b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0\") " pod="calico-system/calico-node-l44z2" Jul 7 01:25:33.742921 kubelet[1843]: I0707 01:25:33.741658 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0-xtables-lock\") pod \"calico-node-l44z2\" (UID: \"b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0\") " pod="calico-system/calico-node-l44z2" Jul 7 01:25:33.749583 kubelet[1843]: I0707 01:25:33.741701 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlk7q\" (UniqueName: \"kubernetes.io/projected/2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f-kube-api-access-jlk7q\") pod \"csi-node-driver-jjttr\" (UID: \"2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f\") " pod="calico-system/csi-node-driver-jjttr" Jul 7 01:25:33.749583 kubelet[1843]: I0707 01:25:33.746691 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/16fd9dd4-d481-48c8-9a01-a6217cc775d6-kube-proxy\") pod \"kube-proxy-sblqw\" (UID: \"16fd9dd4-d481-48c8-9a01-a6217cc775d6\") " pod="kube-system/kube-proxy-sblqw" Jul 7 01:25:33.749583 kubelet[1843]: I0707 01:25:33.746796 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16fd9dd4-d481-48c8-9a01-a6217cc775d6-xtables-lock\") pod \"kube-proxy-sblqw\" (UID: \"16fd9dd4-d481-48c8-9a01-a6217cc775d6\") " pod="kube-system/kube-proxy-sblqw" Jul 7 01:25:33.749583 kubelet[1843]: I0707 01:25:33.746844 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0-cni-bin-dir\") pod \"calico-node-l44z2\" (UID: \"b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0\") " pod="calico-system/calico-node-l44z2" Jul 7 01:25:33.749583 kubelet[1843]: I0707 01:25:33.746887 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0-cni-log-dir\") pod \"calico-node-l44z2\" (UID: \"b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0\") " pod="calico-system/calico-node-l44z2" Jul 7 01:25:33.750136 kubelet[1843]: I0707 01:25:33.746928 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0-flexvol-driver-host\") pod \"calico-node-l44z2\" (UID: \"b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0\") " pod="calico-system/calico-node-l44z2" Jul 7 01:25:33.750136 kubelet[1843]: I0707 01:25:33.746970 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0-policysync\") pod \"calico-node-l44z2\" (UID: \"b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0\") " pod="calico-system/calico-node-l44z2" Jul 7 01:25:33.750136 kubelet[1843]: I0707 01:25:33.747037 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f-socket-dir\") pod \"csi-node-driver-jjttr\" (UID: \"2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f\") " pod="calico-system/csi-node-driver-jjttr" Jul 7 01:25:33.750136 kubelet[1843]: I0707 01:25:33.747079 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f-varrun\") pod 
\"csi-node-driver-jjttr\" (UID: \"2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f\") " pod="calico-system/csi-node-driver-jjttr" Jul 7 01:25:33.750136 kubelet[1843]: I0707 01:25:33.747122 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16fd9dd4-d481-48c8-9a01-a6217cc775d6-lib-modules\") pod \"kube-proxy-sblqw\" (UID: \"16fd9dd4-d481-48c8-9a01-a6217cc775d6\") " pod="kube-system/kube-proxy-sblqw" Jul 7 01:25:33.750534 kubelet[1843]: I0707 01:25:33.747168 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0-tigera-ca-bundle\") pod \"calico-node-l44z2\" (UID: \"b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0\") " pod="calico-system/calico-node-l44z2" Jul 7 01:25:33.750534 kubelet[1843]: I0707 01:25:33.747212 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0-var-run-calico\") pod \"calico-node-l44z2\" (UID: \"b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0\") " pod="calico-system/calico-node-l44z2" Jul 7 01:25:33.750534 kubelet[1843]: I0707 01:25:33.747256 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f-kubelet-dir\") pod \"csi-node-driver-jjttr\" (UID: \"2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f\") " pod="calico-system/csi-node-driver-jjttr" Jul 7 01:25:33.750534 kubelet[1843]: I0707 01:25:33.747301 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f-registration-dir\") pod \"csi-node-driver-jjttr\" (UID: \"2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f\") " pod="calico-system/csi-node-driver-jjttr" Jul 7 01:25:33.751136 systemd[1]: Created slice kubepods-besteffort-pod16fd9dd4_d481_48c8_9a01_a6217cc775d6.slice - libcontainer container kubepods-besteffort-pod16fd9dd4_d481_48c8_9a01_a6217cc775d6.slice. Jul 7 01:25:33.787937 systemd[1]: Created slice kubepods-besteffort-podb698905c_f59b_4a47_bc3d_bfc3f2c1c8f0.slice - libcontainer container kubepods-besteffort-podb698905c_f59b_4a47_bc3d_bfc3f2c1c8f0.slice. Jul 7 01:25:33.864804 kubelet[1843]: E0707 01:25:33.863181 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.864804 kubelet[1843]: W0707 01:25:33.863282 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.864804 kubelet[1843]: E0707 01:25:33.863451 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:25:33.866153 kubelet[1843]: E0707 01:25:33.866119 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.867087 kubelet[1843]: W0707 01:25:33.866989 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.867219 kubelet[1843]: E0707 01:25:33.867090 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.869290 kubelet[1843]: E0707 01:25:33.868454 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.870940 kubelet[1843]: W0707 01:25:33.869546 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.870940 kubelet[1843]: E0707 01:25:33.869603 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.870940 kubelet[1843]: E0707 01:25:33.870053 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.870940 kubelet[1843]: W0707 01:25:33.870089 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.870940 kubelet[1843]: E0707 01:25:33.870123 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.870940 kubelet[1843]: E0707 01:25:33.870497 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.870940 kubelet[1843]: W0707 01:25:33.870520 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.870940 kubelet[1843]: E0707 01:25:33.870543 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.871530 kubelet[1843]: E0707 01:25:33.870975 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.871530 kubelet[1843]: W0707 01:25:33.871000 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.871530 kubelet[1843]: E0707 01:25:33.871024 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:25:33.871878 kubelet[1843]: E0707 01:25:33.871563 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.871878 kubelet[1843]: W0707 01:25:33.871588 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.871878 kubelet[1843]: E0707 01:25:33.871612 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.872163 kubelet[1843]: E0707 01:25:33.872111 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.872163 kubelet[1843]: W0707 01:25:33.872136 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.872343 kubelet[1843]: E0707 01:25:33.872161 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.872576 kubelet[1843]: E0707 01:25:33.872495 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.872576 kubelet[1843]: W0707 01:25:33.872536 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.872576 kubelet[1843]: E0707 01:25:33.872559 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.873164 kubelet[1843]: E0707 01:25:33.873089 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.873164 kubelet[1843]: W0707 01:25:33.873125 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.873164 kubelet[1843]: E0707 01:25:33.873152 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.873596 kubelet[1843]: E0707 01:25:33.873540 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.873596 kubelet[1843]: W0707 01:25:33.873564 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.873596 kubelet[1843]: E0707 01:25:33.873588 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:25:33.874367 kubelet[1843]: E0707 01:25:33.874029 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.874367 kubelet[1843]: W0707 01:25:33.874065 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.874367 kubelet[1843]: E0707 01:25:33.874094 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.875410 kubelet[1843]: E0707 01:25:33.875208 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.875410 kubelet[1843]: W0707 01:25:33.875244 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.875410 kubelet[1843]: E0707 01:25:33.875318 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.876536 kubelet[1843]: E0707 01:25:33.876223 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.876536 kubelet[1843]: W0707 01:25:33.876254 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.876536 kubelet[1843]: E0707 01:25:33.876321 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.877694 kubelet[1843]: E0707 01:25:33.877423 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.877694 kubelet[1843]: W0707 01:25:33.877501 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.877694 kubelet[1843]: E0707 01:25:33.877531 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.879216 kubelet[1843]: E0707 01:25:33.878634 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.879216 kubelet[1843]: W0707 01:25:33.878677 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.879216 kubelet[1843]: E0707 01:25:33.878708 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:25:33.881260 kubelet[1843]: E0707 01:25:33.881211 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.881260 kubelet[1843]: W0707 01:25:33.881260 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.882104 kubelet[1843]: E0707 01:25:33.881722 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.882628 kubelet[1843]: E0707 01:25:33.882589 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.882628 kubelet[1843]: W0707 01:25:33.882624 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.883028 kubelet[1843]: E0707 01:25:33.882717 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.883181 kubelet[1843]: E0707 01:25:33.883132 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.883181 kubelet[1843]: W0707 01:25:33.883170 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.883360 kubelet[1843]: E0707 01:25:33.883313 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.883769 kubelet[1843]: E0707 01:25:33.883697 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.883769 kubelet[1843]: W0707 01:25:33.883729 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.884100 kubelet[1843]: E0707 01:25:33.884021 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.884299 kubelet[1843]: E0707 01:25:33.884272 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.884299 kubelet[1843]: W0707 01:25:33.884296 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.884590 kubelet[1843]: E0707 01:25:33.884404 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:25:33.884832 kubelet[1843]: E0707 01:25:33.884740 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.884832 kubelet[1843]: W0707 01:25:33.884821 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.885165 kubelet[1843]: E0707 01:25:33.884914 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.885368 kubelet[1843]: E0707 01:25:33.885239 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.885368 kubelet[1843]: W0707 01:25:33.885264 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.885697 kubelet[1843]: E0707 01:25:33.885405 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.885955 kubelet[1843]: E0707 01:25:33.885841 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.885955 kubelet[1843]: W0707 01:25:33.885867 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.886310 kubelet[1843]: E0707 01:25:33.886124 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.886442 kubelet[1843]: E0707 01:25:33.886398 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.886442 kubelet[1843]: W0707 01:25:33.886436 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.886837 kubelet[1843]: E0707 01:25:33.886529 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.886938 kubelet[1843]: E0707 01:25:33.886889 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.886938 kubelet[1843]: W0707 01:25:33.886914 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.887269 kubelet[1843]: E0707 01:25:33.887019 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:25:33.887495 kubelet[1843]: E0707 01:25:33.887322 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.887495 kubelet[1843]: W0707 01:25:33.887359 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.887495 kubelet[1843]: E0707 01:25:33.887411 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.887982 kubelet[1843]: E0707 01:25:33.887704 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.887982 kubelet[1843]: W0707 01:25:33.887728 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.888147 kubelet[1843]: E0707 01:25:33.888020 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.888377 kubelet[1843]: E0707 01:25:33.888321 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.888377 kubelet[1843]: W0707 01:25:33.888354 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.888736 kubelet[1843]: E0707 01:25:33.888579 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.889002 kubelet[1843]: E0707 01:25:33.888966 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.889002 kubelet[1843]: W0707 01:25:33.888991 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.889309 kubelet[1843]: E0707 01:25:33.889080 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.889438 kubelet[1843]: E0707 01:25:33.889395 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.889438 kubelet[1843]: W0707 01:25:33.889432 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.890043 kubelet[1843]: E0707 01:25:33.889527 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:25:33.890043 kubelet[1843]: E0707 01:25:33.889938 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.890043 kubelet[1843]: W0707 01:25:33.889963 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.890582 kubelet[1843]: E0707 01:25:33.890348 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.890582 kubelet[1843]: W0707 01:25:33.890390 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.890839 kubelet[1843]: E0707 01:25:33.890726 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.890839 kubelet[1843]: W0707 01:25:33.890817 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.892176 kubelet[1843]: E0707 01:25:33.891150 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.892176 kubelet[1843]: W0707 01:25:33.891186 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.892176 kubelet[1843]: E0707 01:25:33.891507 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.892176 kubelet[1843]: W0707 01:25:33.891527 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.892176 kubelet[1843]: E0707 01:25:33.891959 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.892176 kubelet[1843]: W0707 01:25:33.891982 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.893306 kubelet[1843]: E0707 01:25:33.893190 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.893306 kubelet[1843]: E0707 01:25:33.893287 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.893495 kubelet[1843]: E0707 01:25:33.893310 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:25:33.893495 kubelet[1843]: E0707 01:25:33.893377 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.893495 kubelet[1843]: E0707 01:25:33.893397 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.893495 kubelet[1843]: E0707 01:25:33.893415 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.893883 kubelet[1843]: E0707 01:25:33.893598 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.893883 kubelet[1843]: W0707 01:25:33.893621 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.897886 kubelet[1843]: E0707 01:25:33.896961 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.897886 kubelet[1843]: W0707 01:25:33.896999 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.898405 kubelet[1843]: E0707 01:25:33.898363 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.898652 kubelet[1843]: E0707 01:25:33.898618 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.900109 kubelet[1843]: E0707 01:25:33.900040 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.900109 kubelet[1843]: W0707 01:25:33.900076 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.902041 kubelet[1843]: E0707 01:25:33.902003 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.902306 kubelet[1843]: E0707 01:25:33.902131 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.902618 kubelet[1843]: W0707 01:25:33.902297 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.902906 kubelet[1843]: E0707 01:25:33.902866 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:25:33.904293 kubelet[1843]: E0707 01:25:33.904240 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.904911 kubelet[1843]: W0707 01:25:33.904849 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.905189 kubelet[1843]: E0707 01:25:33.905131 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.906107 kubelet[1843]: E0707 01:25:33.905852 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.906107 kubelet[1843]: W0707 01:25:33.906041 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.907386 kubelet[1843]: E0707 01:25:33.906354 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.908894 kubelet[1843]: E0707 01:25:33.907380 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.908894 kubelet[1843]: W0707 01:25:33.908863 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.909247 kubelet[1843]: E0707 01:25:33.909159 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.909617 kubelet[1843]: E0707 01:25:33.909540 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.909617 kubelet[1843]: W0707 01:25:33.909589 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.909869 kubelet[1843]: E0707 01:25:33.909795 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:25:33.910136 kubelet[1843]: E0707 01:25:33.910077 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.910136 kubelet[1843]: W0707 01:25:33.910124 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.914810 kubelet[1843]: E0707 01:25:33.912051 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.914810 kubelet[1843]: W0707 01:25:33.912082 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.914810 kubelet[1843]: E0707 01:25:33.913952 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.914810 kubelet[1843]: E0707 01:25:33.913977 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.916865 kubelet[1843]: E0707 01:25:33.916816 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.916865 kubelet[1843]: W0707 01:25:33.916834 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.918625 kubelet[1843]: E0707 01:25:33.918587 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.920895 kubelet[1843]: E0707 01:25:33.920825 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.920895 kubelet[1843]: W0707 01:25:33.920845 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.923776 kubelet[1843]: E0707 01:25:33.923721 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.924381 kubelet[1843]: E0707 01:25:33.924351 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.924811 kubelet[1843]: W0707 01:25:33.924375 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.925305 kubelet[1843]: E0707 01:25:33.925275 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:25:33.931317 kubelet[1843]: E0707 01:25:33.931276 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.931317 kubelet[1843]: W0707 01:25:33.931299 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.931605 kubelet[1843]: E0707 01:25:33.931573 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.931605 kubelet[1843]: W0707 01:25:33.931603 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.931684 kubelet[1843]: E0707 01:25:33.931673 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.931710 kubelet[1843]: E0707 01:25:33.931701 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.933183 kubelet[1843]: E0707 01:25:33.933162 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.933183 kubelet[1843]: W0707 01:25:33.933178 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.933302 kubelet[1843]: E0707 01:25:33.933282 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.933547 kubelet[1843]: E0707 01:25:33.933513 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.933593 kubelet[1843]: W0707 01:25:33.933551 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.933593 kubelet[1843]: E0707 01:25:33.933564 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:25:33.939559 kubelet[1843]: E0707 01:25:33.939532 1843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:25:33.939559 kubelet[1843]: W0707 01:25:33.939550 1843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:25:33.939688 kubelet[1843]: E0707 01:25:33.939584 1843 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:25:34.078929 containerd[1460]: time="2025-07-07T01:25:34.078675635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sblqw,Uid:16fd9dd4-d481-48c8-9a01-a6217cc775d6,Namespace:kube-system,Attempt:0,}" Jul 7 01:25:34.098196 containerd[1460]: time="2025-07-07T01:25:34.097486222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l44z2,Uid:b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0,Namespace:calico-system,Attempt:0,}" Jul 7 01:25:34.695330 kubelet[1843]: E0707 01:25:34.695120 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:34.890827 containerd[1460]: time="2025-07-07T01:25:34.890057592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:25:34.895015 containerd[1460]: time="2025-07-07T01:25:34.894854468Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jul 7 01:25:34.899036 containerd[1460]: time="2025-07-07T01:25:34.898936483Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:25:34.902023 containerd[1460]: time="2025-07-07T01:25:34.901921408Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:25:34.903147 containerd[1460]: time="2025-07-07T01:25:34.903070265Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 01:25:34.906977 containerd[1460]: time="2025-07-07T01:25:34.906895642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:25:34.914953 containerd[1460]: time="2025-07-07T01:25:34.912565520Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 814.738659ms" Jul 7 01:25:34.918311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1143494487.mount: Deactivated successfully. Jul 7 01:25:34.920087 containerd[1460]: time="2025-07-07T01:25:34.919680552Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 837.737905ms" Jul 7 01:25:35.189617 containerd[1460]: time="2025-07-07T01:25:35.189145941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:25:35.189617 containerd[1460]: time="2025-07-07T01:25:35.189479416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:25:35.189617 containerd[1460]: time="2025-07-07T01:25:35.189512609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:25:35.190542 containerd[1460]: time="2025-07-07T01:25:35.190330244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:25:35.207944 containerd[1460]: time="2025-07-07T01:25:35.207618260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:25:35.207944 containerd[1460]: time="2025-07-07T01:25:35.207698382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:25:35.207944 containerd[1460]: time="2025-07-07T01:25:35.207728129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:25:35.207944 containerd[1460]: time="2025-07-07T01:25:35.207845542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:25:35.346996 systemd[1]: Started cri-containerd-6ce4fd5430162307346b96e8d5e5162b4e64850ba80e2c9f84786028a78bcd51.scope - libcontainer container 6ce4fd5430162307346b96e8d5e5162b4e64850ba80e2c9f84786028a78bcd51. Jul 7 01:25:35.370636 systemd[1]: Started cri-containerd-36982457634514508fc6eb291e633ebb62728dcecfcb0960d8df0f0b8c350c41.scope - libcontainer container 36982457634514508fc6eb291e633ebb62728dcecfcb0960d8df0f0b8c350c41. Jul 7 01:25:35.444730 containerd[1460]: time="2025-07-07T01:25:35.444461947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l44z2,Uid:b698905c-f59b-4a47-bc3d-bfc3f2c1c8f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"36982457634514508fc6eb291e633ebb62728dcecfcb0960d8df0f0b8c350c41\"" Jul 7 01:25:35.451575 containerd[1460]: time="2025-07-07T01:25:35.451417462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sblqw,Uid:16fd9dd4-d481-48c8-9a01-a6217cc775d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ce4fd5430162307346b96e8d5e5162b4e64850ba80e2c9f84786028a78bcd51\"" Jul 7 01:25:35.452523 containerd[1460]: time="2025-07-07T01:25:35.452136621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 01:25:35.696569 kubelet[1843]: E0707 01:25:35.696361 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:35.850442 kubelet[1843]: E0707 01:25:35.849583 1843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjttr" podUID="2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f" Jul 7 01:25:36.697805 kubelet[1843]: E0707 01:25:36.697621 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:37.611714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652445724.mount: Deactivated successfully. 
Jul 7 01:25:37.698236 kubelet[1843]: E0707 01:25:37.698192 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:37.747415 containerd[1460]: time="2025-07-07T01:25:37.747333867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:25:37.748785 containerd[1460]: time="2025-07-07T01:25:37.748578450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797" Jul 7 01:25:37.749847 containerd[1460]: time="2025-07-07T01:25:37.749790192Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:25:37.752090 containerd[1460]: time="2025-07-07T01:25:37.752059612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:25:37.753148 containerd[1460]: time="2025-07-07T01:25:37.752938291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.300766084s" Jul 7 01:25:37.753148 containerd[1460]: time="2025-07-07T01:25:37.752986042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 01:25:37.756307 containerd[1460]: time="2025-07-07T01:25:37.755386652Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 7 01:25:37.756865 containerd[1460]: time="2025-07-07T01:25:37.756809004Z" level=info msg="CreateContainer within sandbox \"36982457634514508fc6eb291e633ebb62728dcecfcb0960d8df0f0b8c350c41\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 01:25:37.782925 containerd[1460]: time="2025-07-07T01:25:37.782610604Z" level=info msg="CreateContainer within sandbox \"36982457634514508fc6eb291e633ebb62728dcecfcb0960d8df0f0b8c350c41\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"896e8e35dbaab769114f2db98896a32ba0b7b47465ca38ac60076612fba81592\"" Jul 7 01:25:37.790581 containerd[1460]: time="2025-07-07T01:25:37.790521721Z" level=info msg="StartContainer for \"896e8e35dbaab769114f2db98896a32ba0b7b47465ca38ac60076612fba81592\"" Jul 7 01:25:37.847077 systemd[1]: Started cri-containerd-896e8e35dbaab769114f2db98896a32ba0b7b47465ca38ac60076612fba81592.scope - libcontainer container 896e8e35dbaab769114f2db98896a32ba0b7b47465ca38ac60076612fba81592. 
Jul 7 01:25:37.851147 kubelet[1843]: E0707 01:25:37.851053 1843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjttr" podUID="2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f" Jul 7 01:25:37.885593 containerd[1460]: time="2025-07-07T01:25:37.884477741Z" level=info msg="StartContainer for \"896e8e35dbaab769114f2db98896a32ba0b7b47465ca38ac60076612fba81592\" returns successfully" Jul 7 01:25:37.909972 systemd[1]: cri-containerd-896e8e35dbaab769114f2db98896a32ba0b7b47465ca38ac60076612fba81592.scope: Deactivated successfully. Jul 7 01:25:38.045427 containerd[1460]: time="2025-07-07T01:25:38.044857075Z" level=info msg="shim disconnected" id=896e8e35dbaab769114f2db98896a32ba0b7b47465ca38ac60076612fba81592 namespace=k8s.io Jul 7 01:25:38.045427 containerd[1460]: time="2025-07-07T01:25:38.045211928Z" level=warning msg="cleaning up after shim disconnected" id=896e8e35dbaab769114f2db98896a32ba0b7b47465ca38ac60076612fba81592 namespace=k8s.io Jul 7 01:25:38.045427 containerd[1460]: time="2025-07-07T01:25:38.045254820Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:25:38.512504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-896e8e35dbaab769114f2db98896a32ba0b7b47465ca38ac60076612fba81592-rootfs.mount: Deactivated successfully. Jul 7 01:25:38.701769 kubelet[1843]: E0707 01:25:38.699741 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:39.290430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360583921.mount: Deactivated successfully. 
Jul 7 01:25:39.700788 kubelet[1843]: E0707 01:25:39.700646 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:39.852545 kubelet[1843]: E0707 01:25:39.851572 1843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjttr" podUID="2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f" Jul 7 01:25:39.890140 containerd[1460]: time="2025-07-07T01:25:39.889946178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:25:39.891788 containerd[1460]: time="2025-07-07T01:25:39.891671030Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951" Jul 7 01:25:39.892817 containerd[1460]: time="2025-07-07T01:25:39.892635489Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:25:39.901877 containerd[1460]: time="2025-07-07T01:25:39.901821090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:25:39.904978 containerd[1460]: time="2025-07-07T01:25:39.904845577Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.149305573s" Jul 7 01:25:39.904978 containerd[1460]: time="2025-07-07T01:25:39.904934525Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 7 01:25:39.909830 containerd[1460]: time="2025-07-07T01:25:39.908765692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 01:25:39.911983 containerd[1460]: time="2025-07-07T01:25:39.911907690Z" level=info msg="CreateContainer within sandbox \"6ce4fd5430162307346b96e8d5e5162b4e64850ba80e2c9f84786028a78bcd51\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 01:25:39.940262 containerd[1460]: time="2025-07-07T01:25:39.940207283Z" level=info msg="CreateContainer within sandbox \"6ce4fd5430162307346b96e8d5e5162b4e64850ba80e2c9f84786028a78bcd51\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"92ed5834e6ad59d48971dc66ad492268d69d6d24fa12ff2504444fdc3e81d869\"" Jul 7 01:25:39.941196 containerd[1460]: time="2025-07-07T01:25:39.941145413Z" level=info msg="StartContainer for \"92ed5834e6ad59d48971dc66ad492268d69d6d24fa12ff2504444fdc3e81d869\"" Jul 7 01:25:40.001043 systemd[1]: Started cri-containerd-92ed5834e6ad59d48971dc66ad492268d69d6d24fa12ff2504444fdc3e81d869.scope - libcontainer container 92ed5834e6ad59d48971dc66ad492268d69d6d24fa12ff2504444fdc3e81d869. 
Jul 7 01:25:40.042653 containerd[1460]: time="2025-07-07T01:25:40.042571902Z" level=info msg="StartContainer for \"92ed5834e6ad59d48971dc66ad492268d69d6d24fa12ff2504444fdc3e81d869\" returns successfully" Jul 7 01:25:40.701204 kubelet[1843]: E0707 01:25:40.701126 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:40.947046 kubelet[1843]: I0707 01:25:40.946724 1843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sblqw" podStartSLOduration=4.491564738 podStartE2EDuration="8.946540509s" podCreationTimestamp="2025-07-07 01:25:32 +0000 UTC" firstStartedPulling="2025-07-07 01:25:35.452996947 +0000 UTC m=+5.184046195" lastFinishedPulling="2025-07-07 01:25:39.907972717 +0000 UTC m=+9.639021966" observedRunningTime="2025-07-07 01:25:40.943812379 +0000 UTC m=+10.674861727" watchObservedRunningTime="2025-07-07 01:25:40.946540509 +0000 UTC m=+10.677589797" Jul 7 01:25:41.702463 kubelet[1843]: E0707 01:25:41.702365 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:41.866574 kubelet[1843]: E0707 01:25:41.866455 1843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjttr" podUID="2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f" Jul 7 01:25:42.703494 kubelet[1843]: E0707 01:25:42.703404 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:43.704455 kubelet[1843]: E0707 01:25:43.704413 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:43.851219 kubelet[1843]: E0707 01:25:43.850180 1843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjttr" podUID="2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f" Jul 7 01:25:44.705732 kubelet[1843]: E0707 01:25:44.705696 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:44.833295 containerd[1460]: time="2025-07-07T01:25:44.833227890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:25:44.835541 containerd[1460]: time="2025-07-07T01:25:44.835497982Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:25:44.835740 containerd[1460]: time="2025-07-07T01:25:44.835663535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 01:25:44.838140 containerd[1460]: time="2025-07-07T01:25:44.838111534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:25:44.838956 containerd[1460]: time="2025-07-07T01:25:44.838897109Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" 
with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.930087855s" Jul 7 01:25:44.838956 containerd[1460]: time="2025-07-07T01:25:44.838954097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 01:25:44.842261 containerd[1460]: time="2025-07-07T01:25:44.842228198Z" level=info msg="CreateContainer within sandbox \"36982457634514508fc6eb291e633ebb62728dcecfcb0960d8df0f0b8c350c41\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 01:25:44.864500 containerd[1460]: time="2025-07-07T01:25:44.864234086Z" level=info msg="CreateContainer within sandbox \"36982457634514508fc6eb291e633ebb62728dcecfcb0960d8df0f0b8c350c41\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7d178a00c117042b6a4867f9bc06c9c0894229facf45d6398d13f955e0326f7e\"" Jul 7 01:25:44.866313 containerd[1460]: time="2025-07-07T01:25:44.865658169Z" level=info msg="StartContainer for \"7d178a00c117042b6a4867f9bc06c9c0894229facf45d6398d13f955e0326f7e\"" Jul 7 01:25:44.920958 systemd[1]: Started cri-containerd-7d178a00c117042b6a4867f9bc06c9c0894229facf45d6398d13f955e0326f7e.scope - libcontainer container 7d178a00c117042b6a4867f9bc06c9c0894229facf45d6398d13f955e0326f7e. Jul 7 01:25:44.959588 containerd[1460]: time="2025-07-07T01:25:44.958941276Z" level=info msg="StartContainer for \"7d178a00c117042b6a4867f9bc06c9c0894229facf45d6398d13f955e0326f7e\" returns successfully" Jul 7 01:25:45.095408 update_engine[1443]: I20250707 01:25:45.094931 1443 update_attempter.cc:509] Updating boot flags... Jul 7 01:25:45.442166 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2326) Jul 7 01:25:45.513109 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2327) Jul 7 01:25:45.707038 kubelet[1843]: E0707 01:25:45.706522 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:45.851471 kubelet[1843]: E0707 01:25:45.850574 1843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjttr" podUID="2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f" Jul 7 01:25:46.618537 containerd[1460]: time="2025-07-07T01:25:46.618414889Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 01:25:46.624832 systemd[1]: cri-containerd-7d178a00c117042b6a4867f9bc06c9c0894229facf45d6398d13f955e0326f7e.scope: Deactivated successfully. Jul 7 01:25:46.625304 systemd[1]: cri-containerd-7d178a00c117042b6a4867f9bc06c9c0894229facf45d6398d13f955e0326f7e.scope: Consumed 1.107s CPU time. 
Jul 7 01:25:46.651825 kubelet[1843]: I0707 01:25:46.651064 1843 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 01:25:46.698047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d178a00c117042b6a4867f9bc06c9c0894229facf45d6398d13f955e0326f7e-rootfs.mount: Deactivated successfully. Jul 7 01:25:46.707675 kubelet[1843]: E0707 01:25:46.707616 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:47.708128 kubelet[1843]: E0707 01:25:47.708029 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:47.723332 containerd[1460]: time="2025-07-07T01:25:47.723129477Z" level=info msg="shim disconnected" id=7d178a00c117042b6a4867f9bc06c9c0894229facf45d6398d13f955e0326f7e namespace=k8s.io Jul 7 01:25:47.723332 containerd[1460]: time="2025-07-07T01:25:47.723294739Z" level=warning msg="cleaning up after shim disconnected" id=7d178a00c117042b6a4867f9bc06c9c0894229facf45d6398d13f955e0326f7e namespace=k8s.io Jul 7 01:25:47.723332 containerd[1460]: time="2025-07-07T01:25:47.723327832Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:25:47.866108 systemd[1]: Created slice kubepods-besteffort-pod2d5bb8b5_db3a_42bb_aee7_fa6cf1eac43f.slice - libcontainer container kubepods-besteffort-pod2d5bb8b5_db3a_42bb_aee7_fa6cf1eac43f.slice. Jul 7 01:25:47.874215 containerd[1460]: time="2025-07-07T01:25:47.874123764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjttr,Uid:2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f,Namespace:calico-system,Attempt:0,}" Jul 7 01:25:47.959830 containerd[1460]: time="2025-07-07T01:25:47.958225116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 01:25:48.018369 containerd[1460]: time="2025-07-07T01:25:48.018258052Z" level=error msg="Failed to destroy network for sandbox \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:25:48.023601 containerd[1460]: time="2025-07-07T01:25:48.019427108Z" level=error msg="encountered an error cleaning up failed sandbox \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:25:48.023601 containerd[1460]: time="2025-07-07T01:25:48.019615694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjttr,Uid:2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:25:48.024322 kubelet[1843]: E0707 01:25:48.024216 1843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:25:48.024638 kubelet[1843]: E0707 01:25:48.024559 1843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jjttr" Jul 7 01:25:48.024981 kubelet[1843]: E0707 01:25:48.024886 1843 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jjttr" Jul 7 01:25:48.025225 kubelet[1843]: E0707 01:25:48.025067 1843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jjttr_calico-system(2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jjttr_calico-system(2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jjttr" podUID="2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f" Jul 7 01:25:48.025735 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d-shm.mount: Deactivated successfully. 
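The CreatePodSandbox failures above all reduce to the two conditions named in the error text itself: /var/lib/calico/nodename has not been written yet, and /etc/cni/net.d holds no usable network config, both of which only appear once the calico-node container is running. Purely as an illustration (not part of the log), a minimal host-side check of those two paths, taken verbatim from the messages above:

    #!/usr/bin/env python3
    # Illustrative check of the two conditions cited in the sandbox errors above.
    import glob
    import os

    nodename = "/var/lib/calico/nodename"   # per the error text, written once calico/node is running
    cni_dir  = "/etc/cni/net.d"             # where the install-cni step drops the CNI config

    print(nodename, "present" if os.path.exists(nodename) else "missing (calico/node not ready)")
    configs = glob.glob(os.path.join(cni_dir, "*.conf")) + glob.glob(os.path.join(cni_dir, "*.conflist"))
    print(cni_dir, f"{len(configs)} CNI config file(s)")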
Jul 7 01:25:48.708710 kubelet[1843]: E0707 01:25:48.708578 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:48.963960 kubelet[1843]: I0707 01:25:48.960711 1843 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Jul 7 01:25:48.964275 containerd[1460]: time="2025-07-07T01:25:48.962712219Z" level=info msg="StopPodSandbox for \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\"" Jul 7 01:25:48.964275 containerd[1460]: time="2025-07-07T01:25:48.963480850Z" level=info msg="Ensure that sandbox 75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d in task-service has been cleanup successfully" Jul 7 01:25:49.029568 containerd[1460]: time="2025-07-07T01:25:49.029430816Z" level=error msg="StopPodSandbox for \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\" failed" error="failed to destroy network for sandbox \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:25:49.030583 kubelet[1843]: E0707 01:25:49.030477 1843 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Jul 7 01:25:49.030991 kubelet[1843]: E0707 01:25:49.030638 1843 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d"} Jul 7 01:25:49.031181 kubelet[1843]: E0707 01:25:49.030999 1843 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:25:49.031181 kubelet[1843]: E0707 01:25:49.031066 1843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jjttr" podUID="2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f" Jul 7 01:25:49.709661 kubelet[1843]: E0707 01:25:49.709562 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:50.718819 kubelet[1843]: E0707 01:25:50.712330 1843 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:51.328090 systemd[1]: Created slice kubepods-besteffort-pod239f2938_d466_40c8_9675_beca1c58522d.slice - libcontainer container kubepods-besteffort-pod239f2938_d466_40c8_9675_beca1c58522d.slice. Jul 7 01:25:51.413857 kubelet[1843]: I0707 01:25:51.413417 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7sz2\" (UniqueName: \"kubernetes.io/projected/239f2938-d466-40c8-9675-beca1c58522d-kube-api-access-r7sz2\") pod \"nginx-deployment-8587fbcb89-bkfw9\" (UID: \"239f2938-d466-40c8-9675-beca1c58522d\") " pod="default/nginx-deployment-8587fbcb89-bkfw9" Jul 7 01:25:51.650285 containerd[1460]: time="2025-07-07T01:25:51.649231681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-bkfw9,Uid:239f2938-d466-40c8-9675-beca1c58522d,Namespace:default,Attempt:0,}" Jul 7 01:25:51.689156 kubelet[1843]: E0707 01:25:51.688986 1843 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:51.718835 kubelet[1843]: E0707 01:25:51.718769 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:51.781494 containerd[1460]: time="2025-07-07T01:25:51.781415607Z" level=error msg="Failed to destroy network for sandbox \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:25:51.782025 containerd[1460]: time="2025-07-07T01:25:51.781990230Z" level=error msg="encountered an error cleaning up failed sandbox \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:25:51.785776 containerd[1460]: time="2025-07-07T01:25:51.782161433Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-bkfw9,Uid:239f2938-d466-40c8-9675-beca1c58522d,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:25:51.785041 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8-shm.mount: Deactivated successfully. 
Jul 7 01:25:51.786299 kubelet[1843]: E0707 01:25:51.782581 1843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:25:51.786299 kubelet[1843]: E0707 01:25:51.782790 1843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-bkfw9" Jul 7 01:25:51.786299 kubelet[1843]: E0707 01:25:51.782846 1843 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-bkfw9" Jul 7 01:25:51.786620 kubelet[1843]: E0707 01:25:51.782991 1843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-bkfw9_default(239f2938-d466-40c8-9675-beca1c58522d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-bkfw9_default(239f2938-d466-40c8-9675-beca1c58522d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-bkfw9" podUID="239f2938-d466-40c8-9675-beca1c58522d" Jul 7 01:25:52.015634 kubelet[1843]: I0707 01:25:52.014502 1843 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Jul 7 01:25:52.016396 containerd[1460]: time="2025-07-07T01:25:52.016330224Z" level=info msg="StopPodSandbox for \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\"" Jul 7 01:25:52.018008 containerd[1460]: time="2025-07-07T01:25:52.017294180Z" level=info msg="Ensure that sandbox 03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8 in task-service has been cleanup successfully" Jul 7 01:25:52.092224 containerd[1460]: time="2025-07-07T01:25:52.092169422Z" level=error msg="StopPodSandbox for \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\" failed" error="failed to destroy network for sandbox \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:25:52.092859 kubelet[1843]: E0707 01:25:52.092617 1843 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Jul 7 01:25:52.092859 kubelet[1843]: E0707 01:25:52.092703 1843 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8"} Jul 7 01:25:52.092859 kubelet[1843]: E0707 01:25:52.092780 1843 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"239f2938-d466-40c8-9675-beca1c58522d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:25:52.092859 kubelet[1843]: E0707 01:25:52.092819 1843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"239f2938-d466-40c8-9675-beca1c58522d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-bkfw9" podUID="239f2938-d466-40c8-9675-beca1c58522d" Jul 7 01:25:52.719249 kubelet[1843]: E0707 01:25:52.719123 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:53.719612 kubelet[1843]: E0707 01:25:53.719453 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:54.722267 kubelet[1843]: E0707 01:25:54.721990 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:55.724652 kubelet[1843]: E0707 01:25:55.724060 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:56.725711 kubelet[1843]: E0707 01:25:56.724585 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:57.693630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3159315513.mount: Deactivated successfully. 
Jul 7 01:25:57.725582 kubelet[1843]: E0707 01:25:57.725419 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:57.752633 containerd[1460]: time="2025-07-07T01:25:57.752114937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:25:57.756119 containerd[1460]: time="2025-07-07T01:25:57.752787732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 01:25:57.762820 containerd[1460]: time="2025-07-07T01:25:57.762296249Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:25:57.771108 containerd[1460]: time="2025-07-07T01:25:57.770998668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:25:57.772795 containerd[1460]: time="2025-07-07T01:25:57.771924460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 9.81361273s" Jul 7 01:25:57.772795 containerd[1460]: time="2025-07-07T01:25:57.772024648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 01:25:57.822388 containerd[1460]: time="2025-07-07T01:25:57.822336470Z" level=info msg="CreateContainer within sandbox \"36982457634514508fc6eb291e633ebb62728dcecfcb0960d8df0f0b8c350c41\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 01:25:57.851563 containerd[1460]: time="2025-07-07T01:25:57.851499157Z" level=info msg="CreateContainer within sandbox \"36982457634514508fc6eb291e633ebb62728dcecfcb0960d8df0f0b8c350c41\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"36cdb1a22d58c43f88147073bc040e4488195b4b787114bf7a4ec4e278e8bf8d\"" Jul 7 01:25:57.852812 containerd[1460]: time="2025-07-07T01:25:57.852637911Z" level=info msg="StartContainer for \"36cdb1a22d58c43f88147073bc040e4488195b4b787114bf7a4ec4e278e8bf8d\"" Jul 7 01:25:57.915923 systemd[1]: Started cri-containerd-36cdb1a22d58c43f88147073bc040e4488195b4b787114bf7a4ec4e278e8bf8d.scope - libcontainer container 36cdb1a22d58c43f88147073bc040e4488195b4b787114bf7a4ec4e278e8bf8d. Jul 7 01:25:57.976032 containerd[1460]: time="2025-07-07T01:25:57.975825322Z" level=info msg="StartContainer for \"36cdb1a22d58c43f88147073bc040e4488195b4b787114bf7a4ec4e278e8bf8d\" returns successfully" Jul 7 01:25:58.090041 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 01:25:58.090310 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 7 01:25:58.093023 kubelet[1843]: I0707 01:25:58.092384 1843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l44z2" podStartSLOduration=3.765235333 podStartE2EDuration="26.092318005s" podCreationTimestamp="2025-07-07 01:25:32 +0000 UTC" firstStartedPulling="2025-07-07 01:25:35.451018393 +0000 UTC m=+5.182067631" lastFinishedPulling="2025-07-07 01:25:57.778101005 +0000 UTC m=+27.509150303" observedRunningTime="2025-07-07 01:25:58.081796076 +0000 UTC m=+27.812845325" watchObservedRunningTime="2025-07-07 01:25:58.092318005 +0000 UTC m=+27.823367263" Jul 7 01:25:58.726354 kubelet[1843]: E0707 01:25:58.726237 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:59.726960 kubelet[1843]: E0707 01:25:59.726899 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:25:59.855351 containerd[1460]: time="2025-07-07T01:25:59.855047585Z" level=info msg="StopPodSandbox for \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\"" Jul 7 01:26:00.134886 containerd[1460]: 2025-07-07 01:26:00.020 [INFO][2604] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Jul 7 01:26:00.134886 containerd[1460]: 2025-07-07 01:26:00.020 [INFO][2604] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" iface="eth0" netns="/var/run/netns/cni-da0baf2d-e024-4323-94dd-cb035f46a6da" Jul 7 01:26:00.134886 containerd[1460]: 2025-07-07 01:26:00.020 [INFO][2604] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" iface="eth0" netns="/var/run/netns/cni-da0baf2d-e024-4323-94dd-cb035f46a6da" Jul 7 01:26:00.134886 containerd[1460]: 2025-07-07 01:26:00.021 [INFO][2604] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" iface="eth0" netns="/var/run/netns/cni-da0baf2d-e024-4323-94dd-cb035f46a6da" Jul 7 01:26:00.134886 containerd[1460]: 2025-07-07 01:26:00.021 [INFO][2604] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Jul 7 01:26:00.134886 containerd[1460]: 2025-07-07 01:26:00.021 [INFO][2604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Jul 7 01:26:00.134886 containerd[1460]: 2025-07-07 01:26:00.107 [INFO][2611] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" HandleID="k8s-pod-network.75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Workload="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:00.134886 containerd[1460]: 2025-07-07 01:26:00.107 [INFO][2611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:26:00.134886 containerd[1460]: 2025-07-07 01:26:00.107 [INFO][2611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:26:00.134886 containerd[1460]: 2025-07-07 01:26:00.122 [WARNING][2611] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" HandleID="k8s-pod-network.75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Workload="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:00.134886 containerd[1460]: 2025-07-07 01:26:00.122 [INFO][2611] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" HandleID="k8s-pod-network.75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Workload="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:00.134886 containerd[1460]: 2025-07-07 01:26:00.127 [INFO][2611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:26:00.134886 containerd[1460]: 2025-07-07 01:26:00.129 [INFO][2604] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Jul 7 01:26:00.134886 containerd[1460]: time="2025-07-07T01:26:00.133164589Z" level=info msg="TearDown network for sandbox \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\" successfully" Jul 7 01:26:00.134886 containerd[1460]: time="2025-07-07T01:26:00.133739530Z" level=info msg="StopPodSandbox for \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\" returns successfully" Jul 7 01:26:00.140176 systemd[1]: run-netns-cni\x2dda0baf2d\x2de024\x2d4323\x2d94dd\x2dcb035f46a6da.mount: Deactivated successfully. Jul 7 01:26:00.141938 containerd[1460]: time="2025-07-07T01:26:00.141911642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjttr,Uid:2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f,Namespace:calico-system,Attempt:1,}" Jul 7 01:26:00.243831 kernel: bpftool[2651]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 01:26:00.728166 kubelet[1843]: E0707 01:26:00.728022 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:00.844060 systemd-networkd[1374]: vxlan.calico: Link UP Jul 7 01:26:00.844072 systemd-networkd[1374]: vxlan.calico: Gained carrier Jul 7 01:26:01.667201 systemd-networkd[1374]: cali151a42b30e3: Link UP Jul 7 01:26:01.667698 systemd-networkd[1374]: cali151a42b30e3: Gained carrier Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.554 [INFO][2719] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.191-k8s-csi--node--driver--jjttr-eth0 csi-node-driver- calico-system 2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f 1345 0 2025-07-07 01:25:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.24.4.191 csi-node-driver-jjttr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali151a42b30e3 [] [] }} ContainerID="f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" Namespace="calico-system" Pod="csi-node-driver-jjttr" WorkloadEndpoint="172.24.4.191-k8s-csi--node--driver--jjttr-" Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.554 [INFO][2719] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" Namespace="calico-system" Pod="csi-node-driver-jjttr" 
WorkloadEndpoint="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.598 [INFO][2730] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" HandleID="k8s-pod-network.f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" Workload="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.598 [INFO][2730] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" HandleID="k8s-pod-network.f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" Workload="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332500), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.191", "pod":"csi-node-driver-jjttr", "timestamp":"2025-07-07 01:26:01.598527514 +0000 UTC"}, Hostname:"172.24.4.191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.598 [INFO][2730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.598 [INFO][2730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.598 [INFO][2730] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.191' Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.610 [INFO][2730] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" host="172.24.4.191" Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.617 [INFO][2730] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.191" Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.623 [INFO][2730] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="172.24.4.191" Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.626 [INFO][2730] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="172.24.4.191" Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.629 [INFO][2730] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="172.24.4.191" Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.629 [INFO][2730] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" host="172.24.4.191" Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.631 [INFO][2730] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35 Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.637 [INFO][2730] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" host="172.24.4.191" Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.649 [INFO][2730] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.52.129/26] block=192.168.52.128/26 
handle="k8s-pod-network.f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" host="172.24.4.191" Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.650 [INFO][2730] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.129/26] handle="k8s-pod-network.f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" host="172.24.4.191" Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.650 [INFO][2730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:26:01.693167 containerd[1460]: 2025-07-07 01:26:01.650 [INFO][2730] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.129/26] IPv6=[] ContainerID="f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" HandleID="k8s-pod-network.f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" Workload="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:01.698215 containerd[1460]: 2025-07-07 01:26:01.654 [INFO][2719] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" Namespace="calico-system" Pod="csi-node-driver-jjttr" WorkloadEndpoint="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.191-k8s-csi--node--driver--jjttr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f", ResourceVersion:"1345", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.191", ContainerID:"", Pod:"csi-node-driver-jjttr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali151a42b30e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:26:01.698215 containerd[1460]: 2025-07-07 01:26:01.654 [INFO][2719] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.129/32] ContainerID="f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" Namespace="calico-system" Pod="csi-node-driver-jjttr" WorkloadEndpoint="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:01.698215 containerd[1460]: 2025-07-07 01:26:01.654 [INFO][2719] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali151a42b30e3 ContainerID="f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" Namespace="calico-system" Pod="csi-node-driver-jjttr" WorkloadEndpoint="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:01.698215 containerd[1460]: 2025-07-07 01:26:01.668 [INFO][2719] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" Namespace="calico-system" Pod="csi-node-driver-jjttr" WorkloadEndpoint="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:01.698215 containerd[1460]: 2025-07-07 01:26:01.671 [INFO][2719] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" Namespace="calico-system" Pod="csi-node-driver-jjttr" WorkloadEndpoint="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.191-k8s-csi--node--driver--jjttr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f", ResourceVersion:"1345", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.191", ContainerID:"f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35", Pod:"csi-node-driver-jjttr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali151a42b30e3", MAC:"82:fa:b2:91:77:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:26:01.698215 containerd[1460]: 2025-07-07 01:26:01.690 [INFO][2719] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35" Namespace="calico-system" Pod="csi-node-driver-jjttr" WorkloadEndpoint="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:01.733083 kubelet[1843]: E0707 01:26:01.732292 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:01.745675 containerd[1460]: time="2025-07-07T01:26:01.745441129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:26:01.745675 containerd[1460]: time="2025-07-07T01:26:01.745544864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:26:01.745675 containerd[1460]: time="2025-07-07T01:26:01.745560183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:26:01.747835 containerd[1460]: time="2025-07-07T01:26:01.746952361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:26:01.779922 systemd[1]: Started cri-containerd-f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35.scope - libcontainer container f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35. Jul 7 01:26:01.813283 containerd[1460]: time="2025-07-07T01:26:01.813212409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjttr,Uid:2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f,Namespace:calico-system,Attempt:1,} returns sandbox id \"f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35\"" Jul 7 01:26:01.817496 containerd[1460]: time="2025-07-07T01:26:01.817450890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 01:26:02.733393 kubelet[1843]: E0707 01:26:02.733302 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:02.817228 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Jul 7 01:26:03.649136 systemd-networkd[1374]: cali151a42b30e3: Gained IPv6LL Jul 7 01:26:03.733900 kubelet[1843]: E0707 01:26:03.733825 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:04.246440 containerd[1460]: time="2025-07-07T01:26:04.246359006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:26:04.248670 containerd[1460]: time="2025-07-07T01:26:04.248354968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 01:26:04.251048 containerd[1460]: time="2025-07-07T01:26:04.250019738Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:26:04.253853 containerd[1460]: time="2025-07-07T01:26:04.253809271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:26:04.254856 containerd[1460]: time="2025-07-07T01:26:04.254601690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.437097169s" Jul 7 01:26:04.254856 containerd[1460]: time="2025-07-07T01:26:04.254668365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 01:26:04.257913 containerd[1460]: time="2025-07-07T01:26:04.257681028Z" level=info msg="CreateContainer within sandbox \"f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 01:26:04.283563 containerd[1460]: time="2025-07-07T01:26:04.283524577Z" level=info msg="CreateContainer within sandbox \"f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bc863a8e7679c37ab506ce4985d00e76133ecb77ccd780267d445b62f03c2137\"" Jul 7 01:26:04.286778 containerd[1460]: time="2025-07-07T01:26:04.284682032Z" 
level=info msg="StartContainer for \"bc863a8e7679c37ab506ce4985d00e76133ecb77ccd780267d445b62f03c2137\"" Jul 7 01:26:04.326009 systemd[1]: run-containerd-runc-k8s.io-bc863a8e7679c37ab506ce4985d00e76133ecb77ccd780267d445b62f03c2137-runc.JlwEmC.mount: Deactivated successfully. Jul 7 01:26:04.334916 systemd[1]: Started cri-containerd-bc863a8e7679c37ab506ce4985d00e76133ecb77ccd780267d445b62f03c2137.scope - libcontainer container bc863a8e7679c37ab506ce4985d00e76133ecb77ccd780267d445b62f03c2137. Jul 7 01:26:04.369126 containerd[1460]: time="2025-07-07T01:26:04.368826859Z" level=info msg="StartContainer for \"bc863a8e7679c37ab506ce4985d00e76133ecb77ccd780267d445b62f03c2137\" returns successfully" Jul 7 01:26:04.370917 containerd[1460]: time="2025-07-07T01:26:04.370873456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 01:26:04.734812 kubelet[1843]: E0707 01:26:04.734698 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:05.735097 kubelet[1843]: E0707 01:26:05.734949 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:06.736170 kubelet[1843]: E0707 01:26:06.736097 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:06.765676 containerd[1460]: time="2025-07-07T01:26:06.765618561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:26:06.767579 containerd[1460]: time="2025-07-07T01:26:06.767529061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 01:26:06.768848 containerd[1460]: time="2025-07-07T01:26:06.768821169Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:26:06.772522 containerd[1460]: time="2025-07-07T01:26:06.772492618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:26:06.773214 containerd[1460]: time="2025-07-07T01:26:06.773032823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.401964882s" Jul 7 01:26:06.773214 containerd[1460]: time="2025-07-07T01:26:06.773071988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 01:26:06.776790 containerd[1460]: time="2025-07-07T01:26:06.776557056Z" level=info msg="CreateContainer within sandbox \"f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 01:26:06.799179 containerd[1460]: time="2025-07-07T01:26:06.799059844Z" level=info msg="CreateContainer within sandbox 
\"f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"61b1db6bbe02d85d7ad8ad2eed13be597d7c2b64042f745b6074d1533b0b524e\"" Jul 7 01:26:06.800203 containerd[1460]: time="2025-07-07T01:26:06.800042991Z" level=info msg="StartContainer for \"61b1db6bbe02d85d7ad8ad2eed13be597d7c2b64042f745b6074d1533b0b524e\"" Jul 7 01:26:06.833911 systemd[1]: Started cri-containerd-61b1db6bbe02d85d7ad8ad2eed13be597d7c2b64042f745b6074d1533b0b524e.scope - libcontainer container 61b1db6bbe02d85d7ad8ad2eed13be597d7c2b64042f745b6074d1533b0b524e. Jul 7 01:26:06.852894 containerd[1460]: time="2025-07-07T01:26:06.851030556Z" level=info msg="StopPodSandbox for \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\"" Jul 7 01:26:06.875002 containerd[1460]: time="2025-07-07T01:26:06.874871428Z" level=info msg="StartContainer for \"61b1db6bbe02d85d7ad8ad2eed13be597d7c2b64042f745b6074d1533b0b524e\" returns successfully" Jul 7 01:26:06.994461 containerd[1460]: 2025-07-07 01:26:06.933 [INFO][2881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Jul 7 01:26:06.994461 containerd[1460]: 2025-07-07 01:26:06.933 [INFO][2881] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" iface="eth0" netns="/var/run/netns/cni-0da4934d-308c-bcab-7afd-47293b26728b" Jul 7 01:26:06.994461 containerd[1460]: 2025-07-07 01:26:06.934 [INFO][2881] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" iface="eth0" netns="/var/run/netns/cni-0da4934d-308c-bcab-7afd-47293b26728b" Jul 7 01:26:06.994461 containerd[1460]: 2025-07-07 01:26:06.934 [INFO][2881] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" iface="eth0" netns="/var/run/netns/cni-0da4934d-308c-bcab-7afd-47293b26728b" Jul 7 01:26:06.994461 containerd[1460]: 2025-07-07 01:26:06.934 [INFO][2881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Jul 7 01:26:06.994461 containerd[1460]: 2025-07-07 01:26:06.934 [INFO][2881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Jul 7 01:26:06.994461 containerd[1460]: 2025-07-07 01:26:06.969 [INFO][2896] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" HandleID="k8s-pod-network.03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Workload="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:06.994461 containerd[1460]: 2025-07-07 01:26:06.969 [INFO][2896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:26:06.994461 containerd[1460]: 2025-07-07 01:26:06.969 [INFO][2896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:26:06.994461 containerd[1460]: 2025-07-07 01:26:06.986 [WARNING][2896] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" HandleID="k8s-pod-network.03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Workload="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:06.994461 containerd[1460]: 2025-07-07 01:26:06.986 [INFO][2896] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" HandleID="k8s-pod-network.03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Workload="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:06.994461 containerd[1460]: 2025-07-07 01:26:06.989 [INFO][2896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:26:06.994461 containerd[1460]: 2025-07-07 01:26:06.991 [INFO][2881] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Jul 7 01:26:06.998179 containerd[1460]: time="2025-07-07T01:26:06.994912518Z" level=info msg="TearDown network for sandbox \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\" successfully" Jul 7 01:26:06.998179 containerd[1460]: time="2025-07-07T01:26:06.994985406Z" level=info msg="StopPodSandbox for \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\" returns successfully" Jul 7 01:26:06.998179 containerd[1460]: time="2025-07-07T01:26:06.996396567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-bkfw9,Uid:239f2938-d466-40c8-9675-beca1c58522d,Namespace:default,Attempt:1,}" Jul 7 01:26:06.997663 systemd[1]: run-netns-cni\x2d0da4934d\x2d308c\x2dbcab\x2d7afd\x2d47293b26728b.mount: Deactivated successfully. Jul 7 01:26:07.115573 kubelet[1843]: I0707 01:26:07.115245 1843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jjttr" podStartSLOduration=30.156663439 podStartE2EDuration="35.115109416s" podCreationTimestamp="2025-07-07 01:25:32 +0000 UTC" firstStartedPulling="2025-07-07 01:26:01.816011594 +0000 UTC m=+31.547060842" lastFinishedPulling="2025-07-07 01:26:06.774457581 +0000 UTC m=+36.505506819" observedRunningTime="2025-07-07 01:26:07.112617523 +0000 UTC m=+36.843666791" watchObservedRunningTime="2025-07-07 01:26:07.115109416 +0000 UTC m=+36.846158654" Jul 7 01:26:07.238198 systemd-networkd[1374]: calif2b823de1be: Link UP Jul 7 01:26:07.242286 systemd-networkd[1374]: calif2b823de1be: Gained carrier Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.072 [INFO][2903] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0 nginx-deployment-8587fbcb89- default 239f2938-d466-40c8-9675-beca1c58522d 1370 0 2025-07-07 01:25:51 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.191 nginx-deployment-8587fbcb89-bkfw9 eth0 default [] [] [kns.default ksa.default.default] calif2b823de1be [] [] }} ContainerID="98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" Namespace="default" Pod="nginx-deployment-8587fbcb89-bkfw9" WorkloadEndpoint="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-" Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.072 [INFO][2903] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" Namespace="default" Pod="nginx-deployment-8587fbcb89-bkfw9" WorkloadEndpoint="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.136 [INFO][2914] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" HandleID="k8s-pod-network.98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" Workload="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.136 [INFO][2914] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" HandleID="k8s-pod-network.98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" Workload="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102e20), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.191", "pod":"nginx-deployment-8587fbcb89-bkfw9", "timestamp":"2025-07-07 01:26:07.136515406 +0000 UTC"}, Hostname:"172.24.4.191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.136 [INFO][2914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.136 [INFO][2914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.136 [INFO][2914] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.191' Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.160 [INFO][2914] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" host="172.24.4.191" Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.176 [INFO][2914] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.191" Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.193 [INFO][2914] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="172.24.4.191" Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.199 [INFO][2914] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="172.24.4.191" Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.203 [INFO][2914] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="172.24.4.191" Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.203 [INFO][2914] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" host="172.24.4.191" Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.205 [INFO][2914] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03 Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.212 [INFO][2914] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" host="172.24.4.191" Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 
01:26:07.225 [INFO][2914] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.52.130/26] block=192.168.52.128/26 handle="k8s-pod-network.98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" host="172.24.4.191" Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.225 [INFO][2914] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.130/26] handle="k8s-pod-network.98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" host="172.24.4.191" Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.225 [INFO][2914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:26:07.275511 containerd[1460]: 2025-07-07 01:26:07.225 [INFO][2914] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.130/26] IPv6=[] ContainerID="98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" HandleID="k8s-pod-network.98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" Workload="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:07.285956 containerd[1460]: 2025-07-07 01:26:07.229 [INFO][2903] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" Namespace="default" Pod="nginx-deployment-8587fbcb89-bkfw9" WorkloadEndpoint="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"239f2938-d466-40c8-9675-beca1c58522d", ResourceVersion:"1370", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.191", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-bkfw9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif2b823de1be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:26:07.285956 containerd[1460]: 2025-07-07 01:26:07.229 [INFO][2903] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.130/32] ContainerID="98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" Namespace="default" Pod="nginx-deployment-8587fbcb89-bkfw9" WorkloadEndpoint="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:07.285956 containerd[1460]: 2025-07-07 01:26:07.230 [INFO][2903] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2b823de1be ContainerID="98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" Namespace="default" Pod="nginx-deployment-8587fbcb89-bkfw9" WorkloadEndpoint="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:07.285956 containerd[1460]: 2025-07-07 01:26:07.243 [INFO][2903] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" Namespace="default" Pod="nginx-deployment-8587fbcb89-bkfw9" WorkloadEndpoint="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:07.285956 containerd[1460]: 2025-07-07 01:26:07.247 [INFO][2903] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" Namespace="default" Pod="nginx-deployment-8587fbcb89-bkfw9" WorkloadEndpoint="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"239f2938-d466-40c8-9675-beca1c58522d", ResourceVersion:"1370", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.191", ContainerID:"98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03", Pod:"nginx-deployment-8587fbcb89-bkfw9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif2b823de1be", MAC:"52:a1:dc:3c:9b:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:26:07.285956 containerd[1460]: 2025-07-07 01:26:07.264 [INFO][2903] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03" Namespace="default" Pod="nginx-deployment-8587fbcb89-bkfw9" WorkloadEndpoint="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:07.320484 containerd[1460]: time="2025-07-07T01:26:07.320082962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:26:07.320484 containerd[1460]: time="2025-07-07T01:26:07.320154718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:26:07.320484 containerd[1460]: time="2025-07-07T01:26:07.320205593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:26:07.320484 containerd[1460]: time="2025-07-07T01:26:07.320314327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:26:07.339926 systemd[1]: Started cri-containerd-98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03.scope - libcontainer container 98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03. 
Jul 7 01:26:07.381248 containerd[1460]: time="2025-07-07T01:26:07.381202443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-bkfw9,Uid:239f2938-d466-40c8-9675-beca1c58522d,Namespace:default,Attempt:1,} returns sandbox id \"98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03\"" Jul 7 01:26:07.383513 containerd[1460]: time="2025-07-07T01:26:07.383487878Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 7 01:26:07.737431 kubelet[1843]: E0707 01:26:07.737349 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:07.886631 kubelet[1843]: I0707 01:26:07.886478 1843 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 01:26:07.887003 kubelet[1843]: I0707 01:26:07.886713 1843 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 01:26:08.705186 systemd-networkd[1374]: calif2b823de1be: Gained IPv6LL Jul 7 01:26:08.738895 kubelet[1843]: E0707 01:26:08.738675 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:09.739072 kubelet[1843]: E0707 01:26:09.739018 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:10.740274 kubelet[1843]: E0707 01:26:10.740224 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:11.343230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3742774878.mount: Deactivated successfully. 
Jul 7 01:26:11.688930 kubelet[1843]: E0707 01:26:11.686786 1843 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:11.740903 kubelet[1843]: E0707 01:26:11.740827 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:12.742145 kubelet[1843]: E0707 01:26:12.742093 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:12.970182 containerd[1460]: time="2025-07-07T01:26:12.969911687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:26:12.972290 containerd[1460]: time="2025-07-07T01:26:12.971684457Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73313230" Jul 7 01:26:12.975083 containerd[1460]: time="2025-07-07T01:26:12.975013288Z" level=info msg="ImageCreate event name:\"sha256:601c94998c5615a5f36a1babb9bcc2b1d9f112c02c19d68701b29f3fd6b2feb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:26:12.983699 containerd[1460]: time="2025-07-07T01:26:12.983593442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:9a18b8b1845bf88a3388cde4ec626461965a717ac641198120979e75438b9693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:26:12.984850 containerd[1460]: time="2025-07-07T01:26:12.984705760Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:601c94998c5615a5f36a1babb9bcc2b1d9f112c02c19d68701b29f3fd6b2feb8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:9a18b8b1845bf88a3388cde4ec626461965a717ac641198120979e75438b9693\", size \"73313108\" in 5.601184992s" Jul 7 01:26:12.984850 containerd[1460]: time="2025-07-07T01:26:12.984740506Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:601c94998c5615a5f36a1babb9bcc2b1d9f112c02c19d68701b29f3fd6b2feb8\"" Jul 7 01:26:12.989280 containerd[1460]: time="2025-07-07T01:26:12.989151691Z" level=info msg="CreateContainer within sandbox \"98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 7 01:26:13.009768 containerd[1460]: time="2025-07-07T01:26:13.009518167Z" level=info msg="CreateContainer within sandbox \"98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"c9a0d638b0af32ceddb54c75e97c95e800eafd928f152e401b4ef60aae2f86d0\"" Jul 7 01:26:13.012721 containerd[1460]: time="2025-07-07T01:26:13.011801716Z" level=info msg="StartContainer for \"c9a0d638b0af32ceddb54c75e97c95e800eafd928f152e401b4ef60aae2f86d0\"" Jul 7 01:26:13.070992 systemd[1]: run-containerd-runc-k8s.io-c9a0d638b0af32ceddb54c75e97c95e800eafd928f152e401b4ef60aae2f86d0-runc.U6KBpZ.mount: Deactivated successfully. Jul 7 01:26:13.080982 systemd[1]: Started cri-containerd-c9a0d638b0af32ceddb54c75e97c95e800eafd928f152e401b4ef60aae2f86d0.scope - libcontainer container c9a0d638b0af32ceddb54c75e97c95e800eafd928f152e401b4ef60aae2f86d0. 
Jul 7 01:26:13.122641 containerd[1460]: time="2025-07-07T01:26:13.122411133Z" level=info msg="StartContainer for \"c9a0d638b0af32ceddb54c75e97c95e800eafd928f152e401b4ef60aae2f86d0\" returns successfully" Jul 7 01:26:13.743088 kubelet[1843]: E0707 01:26:13.743011 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:14.743959 kubelet[1843]: E0707 01:26:14.743835 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:14.930513 systemd[1]: run-containerd-runc-k8s.io-36cdb1a22d58c43f88147073bc040e4488195b4b787114bf7a4ec4e278e8bf8d-runc.Zq9i2j.mount: Deactivated successfully. Jul 7 01:26:15.082323 kubelet[1843]: I0707 01:26:15.082020 1843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-bkfw9" podStartSLOduration=18.478555031 podStartE2EDuration="24.081997703s" podCreationTimestamp="2025-07-07 01:25:51 +0000 UTC" firstStartedPulling="2025-07-07 01:26:07.383121339 +0000 UTC m=+37.114170577" lastFinishedPulling="2025-07-07 01:26:12.986564011 +0000 UTC m=+42.717613249" observedRunningTime="2025-07-07 01:26:14.137469941 +0000 UTC m=+43.868519229" watchObservedRunningTime="2025-07-07 01:26:15.081997703 +0000 UTC m=+44.813046941" Jul 7 01:26:15.744959 kubelet[1843]: E0707 01:26:15.744809 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:16.745954 kubelet[1843]: E0707 01:26:16.745827 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:17.746605 kubelet[1843]: E0707 01:26:17.746509 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:18.747593 kubelet[1843]: E0707 01:26:18.747469 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:19.747852 kubelet[1843]: E0707 01:26:19.747731 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:20.748518 kubelet[1843]: E0707 01:26:20.748391 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:21.749230 kubelet[1843]: E0707 01:26:21.749083 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:22.741616 systemd[1]: Created slice kubepods-besteffort-pode0795c58_863d_4f61_bd21_89bb82835845.slice - libcontainer container kubepods-besteffort-pode0795c58_863d_4f61_bd21_89bb82835845.slice. 
Jul 7 01:26:22.750332 kubelet[1843]: E0707 01:26:22.750265 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:22.827204 kubelet[1843]: I0707 01:26:22.826832 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e0795c58-863d-4f61-bd21-89bb82835845-data\") pod \"nfs-server-provisioner-0\" (UID: \"e0795c58-863d-4f61-bd21-89bb82835845\") " pod="default/nfs-server-provisioner-0" Jul 7 01:26:22.827204 kubelet[1843]: I0707 01:26:22.826951 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lmkg\" (UniqueName: \"kubernetes.io/projected/e0795c58-863d-4f61-bd21-89bb82835845-kube-api-access-4lmkg\") pod \"nfs-server-provisioner-0\" (UID: \"e0795c58-863d-4f61-bd21-89bb82835845\") " pod="default/nfs-server-provisioner-0" Jul 7 01:26:23.051308 containerd[1460]: time="2025-07-07T01:26:23.050339212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e0795c58-863d-4f61-bd21-89bb82835845,Namespace:default,Attempt:0,}" Jul 7 01:26:23.279610 systemd-networkd[1374]: cali60e51b789ff: Link UP Jul 7 01:26:23.281073 systemd-networkd[1374]: cali60e51b789ff: Gained carrier Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.163 [INFO][3130] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.191-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default e0795c58-863d-4f61-bd21-89bb82835845 1439 0 2025-07-07 01:26:22 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.24.4.191 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.191-k8s-nfs--server--provisioner--0-" Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.163 [INFO][3130] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.191-k8s-nfs--server--provisioner--0-eth0" Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.222 [INFO][3141] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" HandleID="k8s-pod-network.8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" Workload="172.24.4.191-k8s-nfs--server--provisioner--0-eth0" Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.223 [INFO][3141] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" HandleID="k8s-pod-network.8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" Workload="172.24.4.191-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4150), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.191", "pod":"nfs-server-provisioner-0", "timestamp":"2025-07-07 01:26:23.222653019 +0000 UTC"}, Hostname:"172.24.4.191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.223 [INFO][3141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.223 [INFO][3141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.223 [INFO][3141] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.191' Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.234 [INFO][3141] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" host="172.24.4.191" Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.241 [INFO][3141] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.191" Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.247 [INFO][3141] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="172.24.4.191" Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.250 [INFO][3141] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="172.24.4.191" Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.253 [INFO][3141] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="172.24.4.191" Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.253 [INFO][3141] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" host="172.24.4.191" Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.256 [INFO][3141] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504 Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.261 [INFO][3141] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" host="172.24.4.191" Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.273 [INFO][3141] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.52.131/26] block=192.168.52.128/26 handle="k8s-pod-network.8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" host="172.24.4.191" Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.273 [INFO][3141] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.131/26] handle="k8s-pod-network.8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" host="172.24.4.191" Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.273 [INFO][3141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:26:23.302405 containerd[1460]: 2025-07-07 01:26:23.273 [INFO][3141] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.131/26] IPv6=[] ContainerID="8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" HandleID="k8s-pod-network.8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" Workload="172.24.4.191-k8s-nfs--server--provisioner--0-eth0" Jul 7 01:26:23.303389 containerd[1460]: 2025-07-07 01:26:23.275 [INFO][3130] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.191-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.191-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"e0795c58-863d-4f61-bd21-89bb82835845", ResourceVersion:"1439", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 26, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.191", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:26:23.303389 containerd[1460]: 2025-07-07 01:26:23.275 [INFO][3130] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.131/32] ContainerID="8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.191-k8s-nfs--server--provisioner--0-eth0" Jul 7 01:26:23.303389 containerd[1460]: 2025-07-07 01:26:23.275 [INFO][3130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.191-k8s-nfs--server--provisioner--0-eth0" Jul 7 01:26:23.303389 containerd[1460]: 2025-07-07 01:26:23.279 [INFO][3130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.191-k8s-nfs--server--provisioner--0-eth0" Jul 7 01:26:23.303660 containerd[1460]: 2025-07-07 01:26:23.280 [INFO][3130] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.191-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.191-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"e0795c58-863d-4f61-bd21-89bb82835845", ResourceVersion:"1439", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 26, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.191", ContainerID:"8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"da:8b:ff:64:13:1e", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:26:23.303660 containerd[1460]: 2025-07-07 01:26:23.300 [INFO][3130] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.191-k8s-nfs--server--provisioner--0-eth0" Jul 7 01:26:23.332098 containerd[1460]: time="2025-07-07T01:26:23.331985246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:26:23.332450 containerd[1460]: time="2025-07-07T01:26:23.332280139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:26:23.332450 containerd[1460]: time="2025-07-07T01:26:23.332320755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:26:23.332722 containerd[1460]: time="2025-07-07T01:26:23.332644123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:26:23.361955 systemd[1]: Started cri-containerd-8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504.scope - libcontainer container 8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504. 
Jul 7 01:26:23.404462 containerd[1460]: time="2025-07-07T01:26:23.404386814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e0795c58-863d-4f61-bd21-89bb82835845,Namespace:default,Attempt:0,} returns sandbox id \"8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504\"" Jul 7 01:26:23.406572 containerd[1460]: time="2025-07-07T01:26:23.406551347Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 7 01:26:23.751542 kubelet[1843]: E0707 01:26:23.751392 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:24.751598 kubelet[1843]: E0707 01:26:24.751541 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:24.771841 systemd-networkd[1374]: cali60e51b789ff: Gained IPv6LL Jul 7 01:26:25.753001 kubelet[1843]: E0707 01:26:25.752897 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:26.753679 kubelet[1843]: E0707 01:26:26.753597 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:27.002737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19151305.mount: Deactivated successfully. Jul 7 01:26:27.755135 kubelet[1843]: E0707 01:26:27.754203 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:28.754399 kubelet[1843]: E0707 01:26:28.754356 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:29.755570 kubelet[1843]: E0707 01:26:29.755499 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:29.941670 containerd[1460]: time="2025-07-07T01:26:29.941547219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:26:29.943503 containerd[1460]: time="2025-07-07T01:26:29.943302704Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jul 7 01:26:29.946263 containerd[1460]: time="2025-07-07T01:26:29.944696970Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:26:29.948334 containerd[1460]: time="2025-07-07T01:26:29.948305380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:26:29.949409 containerd[1460]: time="2025-07-07T01:26:29.949360169Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.542654533s" Jul 7 01:26:29.949469 containerd[1460]: time="2025-07-07T01:26:29.949415193Z" level=info msg="PullImage 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jul 7 01:26:29.953442 containerd[1460]: time="2025-07-07T01:26:29.953402534Z" level=info msg="CreateContainer within sandbox \"8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 7 01:26:29.984282 containerd[1460]: time="2025-07-07T01:26:29.984068353Z" level=info msg="CreateContainer within sandbox \"8862b98815e5306180c60d1fcb90748ae9b22a0a5bc9f3dede78434ea9c03504\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"c0d6a3e53187becded629e1a128f5a85478772c262e0b6476bb85219c4bc8a24\"" Jul 7 01:26:29.985828 containerd[1460]: time="2025-07-07T01:26:29.985788851Z" level=info msg="StartContainer for \"c0d6a3e53187becded629e1a128f5a85478772c262e0b6476bb85219c4bc8a24\"" Jul 7 01:26:30.044193 systemd[1]: run-containerd-runc-k8s.io-c0d6a3e53187becded629e1a128f5a85478772c262e0b6476bb85219c4bc8a24-runc.GYPkhg.mount: Deactivated successfully. Jul 7 01:26:30.052064 systemd[1]: Started cri-containerd-c0d6a3e53187becded629e1a128f5a85478772c262e0b6476bb85219c4bc8a24.scope - libcontainer container c0d6a3e53187becded629e1a128f5a85478772c262e0b6476bb85219c4bc8a24. Jul 7 01:26:30.093121 containerd[1460]: time="2025-07-07T01:26:30.093033083Z" level=info msg="StartContainer for \"c0d6a3e53187becded629e1a128f5a85478772c262e0b6476bb85219c4bc8a24\" returns successfully" Jul 7 01:26:30.362058 kubelet[1843]: I0707 01:26:30.361498 1843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.81684753 podStartE2EDuration="8.361436714s" podCreationTimestamp="2025-07-07 01:26:22 +0000 UTC" firstStartedPulling="2025-07-07 01:26:23.406178015 +0000 UTC m=+53.137227253" lastFinishedPulling="2025-07-07 01:26:29.950766899 +0000 UTC m=+59.681816437" observedRunningTime="2025-07-07 01:26:30.361419472 +0000 UTC m=+60.092468770" watchObservedRunningTime="2025-07-07 01:26:30.361436714 +0000 UTC m=+60.092486012" Jul 7 01:26:30.756361 kubelet[1843]: E0707 01:26:30.756157 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:31.686676 kubelet[1843]: E0707 01:26:31.686573 1843 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:31.740069 containerd[1460]: time="2025-07-07T01:26:31.739361408Z" level=info msg="StopPodSandbox for \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\"" Jul 7 01:26:31.757025 kubelet[1843]: E0707 01:26:31.756939 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:31.959602 containerd[1460]: 2025-07-07 01:26:31.872 [WARNING][3291] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.191-k8s-csi--node--driver--jjttr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f", ResourceVersion:"1371", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.191", ContainerID:"f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35", Pod:"csi-node-driver-jjttr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali151a42b30e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:26:31.959602 containerd[1460]: 2025-07-07 01:26:31.873 [INFO][3291] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Jul 7 01:26:31.959602 containerd[1460]: 2025-07-07 01:26:31.874 [INFO][3291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" iface="eth0" netns="" Jul 7 01:26:31.959602 containerd[1460]: 2025-07-07 01:26:31.874 [INFO][3291] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Jul 7 01:26:31.959602 containerd[1460]: 2025-07-07 01:26:31.874 [INFO][3291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Jul 7 01:26:31.959602 containerd[1460]: 2025-07-07 01:26:31.944 [INFO][3300] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" HandleID="k8s-pod-network.75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Workload="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:31.959602 containerd[1460]: 2025-07-07 01:26:31.944 [INFO][3300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:26:31.959602 containerd[1460]: 2025-07-07 01:26:31.944 [INFO][3300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:26:31.959602 containerd[1460]: 2025-07-07 01:26:31.955 [WARNING][3300] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" HandleID="k8s-pod-network.75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Workload="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:31.959602 containerd[1460]: 2025-07-07 01:26:31.955 [INFO][3300] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" HandleID="k8s-pod-network.75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Workload="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:31.959602 containerd[1460]: 2025-07-07 01:26:31.957 [INFO][3300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:26:31.959602 containerd[1460]: 2025-07-07 01:26:31.958 [INFO][3291] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Jul 7 01:26:31.961121 containerd[1460]: time="2025-07-07T01:26:31.959640282Z" level=info msg="TearDown network for sandbox \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\" successfully" Jul 7 01:26:31.961121 containerd[1460]: time="2025-07-07T01:26:31.959667754Z" level=info msg="StopPodSandbox for \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\" returns successfully" Jul 7 01:26:31.961121 containerd[1460]: time="2025-07-07T01:26:31.960332160Z" level=info msg="RemovePodSandbox for \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\"" Jul 7 01:26:31.961121 containerd[1460]: time="2025-07-07T01:26:31.960366094Z" level=info msg="Forcibly stopping sandbox \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\"" Jul 7 01:26:32.062383 containerd[1460]: 2025-07-07 01:26:32.007 [WARNING][3314] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.191-k8s-csi--node--driver--jjttr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d5bb8b5-db3a-42bb-aee7-fa6cf1eac43f", ResourceVersion:"1371", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.191", ContainerID:"f4a964b3c83395a78fab45f0d9977aee1658d0f1da2a7a26b01299a1b2420e35", Pod:"csi-node-driver-jjttr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali151a42b30e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:26:32.062383 containerd[1460]: 2025-07-07 01:26:32.007 [INFO][3314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Jul 7 01:26:32.062383 containerd[1460]: 2025-07-07 01:26:32.007 [INFO][3314] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" iface="eth0" netns="" Jul 7 01:26:32.062383 containerd[1460]: 2025-07-07 01:26:32.007 [INFO][3314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Jul 7 01:26:32.062383 containerd[1460]: 2025-07-07 01:26:32.007 [INFO][3314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Jul 7 01:26:32.062383 containerd[1460]: 2025-07-07 01:26:32.040 [INFO][3321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" HandleID="k8s-pod-network.75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Workload="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:32.062383 containerd[1460]: 2025-07-07 01:26:32.040 [INFO][3321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:26:32.062383 containerd[1460]: 2025-07-07 01:26:32.040 [INFO][3321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:26:32.062383 containerd[1460]: 2025-07-07 01:26:32.055 [WARNING][3321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" HandleID="k8s-pod-network.75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Workload="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:32.062383 containerd[1460]: 2025-07-07 01:26:32.055 [INFO][3321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" HandleID="k8s-pod-network.75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Workload="172.24.4.191-k8s-csi--node--driver--jjttr-eth0" Jul 7 01:26:32.062383 containerd[1460]: 2025-07-07 01:26:32.058 [INFO][3321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:26:32.062383 containerd[1460]: 2025-07-07 01:26:32.059 [INFO][3314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d" Jul 7 01:26:32.065013 containerd[1460]: time="2025-07-07T01:26:32.062878258Z" level=info msg="TearDown network for sandbox \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\" successfully" Jul 7 01:26:32.068010 containerd[1460]: time="2025-07-07T01:26:32.067905950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:26:32.068229 containerd[1460]: time="2025-07-07T01:26:32.068049931Z" level=info msg="RemovePodSandbox \"75b2cdeb8b5e212f36cf65a38e73e45784593fb8ce531d8cb1ce2552ef9f370d\" returns successfully" Jul 7 01:26:32.069469 containerd[1460]: time="2025-07-07T01:26:32.069386578Z" level=info msg="StopPodSandbox for \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\"" Jul 7 01:26:32.203139 containerd[1460]: 2025-07-07 01:26:32.149 [WARNING][3336] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"239f2938-d466-40c8-9675-beca1c58522d", ResourceVersion:"1398", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.191", ContainerID:"98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03", Pod:"nginx-deployment-8587fbcb89-bkfw9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif2b823de1be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:26:32.203139 containerd[1460]: 2025-07-07 01:26:32.149 [INFO][3336] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Jul 7 01:26:32.203139 containerd[1460]: 2025-07-07 01:26:32.149 [INFO][3336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" iface="eth0" netns="" Jul 7 01:26:32.203139 containerd[1460]: 2025-07-07 01:26:32.149 [INFO][3336] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Jul 7 01:26:32.203139 containerd[1460]: 2025-07-07 01:26:32.149 [INFO][3336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Jul 7 01:26:32.203139 containerd[1460]: 2025-07-07 01:26:32.184 [INFO][3344] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" HandleID="k8s-pod-network.03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Workload="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:32.203139 containerd[1460]: 2025-07-07 01:26:32.185 [INFO][3344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:26:32.203139 containerd[1460]: 2025-07-07 01:26:32.185 [INFO][3344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:26:32.203139 containerd[1460]: 2025-07-07 01:26:32.198 [WARNING][3344] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" HandleID="k8s-pod-network.03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Workload="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:32.203139 containerd[1460]: 2025-07-07 01:26:32.198 [INFO][3344] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" HandleID="k8s-pod-network.03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Workload="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:32.203139 containerd[1460]: 2025-07-07 01:26:32.200 [INFO][3344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:26:32.203139 containerd[1460]: 2025-07-07 01:26:32.202 [INFO][3336] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Jul 7 01:26:32.204311 containerd[1460]: time="2025-07-07T01:26:32.203161797Z" level=info msg="TearDown network for sandbox \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\" successfully" Jul 7 01:26:32.204311 containerd[1460]: time="2025-07-07T01:26:32.203188627Z" level=info msg="StopPodSandbox for \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\" returns successfully" Jul 7 01:26:32.204311 containerd[1460]: time="2025-07-07T01:26:32.203848655Z" level=info msg="RemovePodSandbox for \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\"" Jul 7 01:26:32.204311 containerd[1460]: time="2025-07-07T01:26:32.203875976Z" level=info msg="Forcibly stopping sandbox \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\"" Jul 7 01:26:32.289550 containerd[1460]: 2025-07-07 01:26:32.245 [WARNING][3358] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"239f2938-d466-40c8-9675-beca1c58522d", ResourceVersion:"1398", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.191", ContainerID:"98234a46402d04fbcb497c3c41b8a557bdee8a85d6b356ec9d5a49ecd9612a03", Pod:"nginx-deployment-8587fbcb89-bkfw9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif2b823de1be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:26:32.289550 containerd[1460]: 2025-07-07 01:26:32.245 [INFO][3358] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Jul 7 01:26:32.289550 containerd[1460]: 2025-07-07 01:26:32.245 [INFO][3358] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" iface="eth0" netns="" Jul 7 01:26:32.289550 containerd[1460]: 2025-07-07 01:26:32.245 [INFO][3358] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Jul 7 01:26:32.289550 containerd[1460]: 2025-07-07 01:26:32.245 [INFO][3358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Jul 7 01:26:32.289550 containerd[1460]: 2025-07-07 01:26:32.274 [INFO][3365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" HandleID="k8s-pod-network.03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Workload="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:32.289550 containerd[1460]: 2025-07-07 01:26:32.275 [INFO][3365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:26:32.289550 containerd[1460]: 2025-07-07 01:26:32.275 [INFO][3365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:26:32.289550 containerd[1460]: 2025-07-07 01:26:32.283 [WARNING][3365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" HandleID="k8s-pod-network.03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Workload="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:32.289550 containerd[1460]: 2025-07-07 01:26:32.283 [INFO][3365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" HandleID="k8s-pod-network.03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Workload="172.24.4.191-k8s-nginx--deployment--8587fbcb89--bkfw9-eth0" Jul 7 01:26:32.289550 containerd[1460]: 2025-07-07 01:26:32.285 [INFO][3365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:26:32.289550 containerd[1460]: 2025-07-07 01:26:32.286 [INFO][3358] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8" Jul 7 01:26:32.289550 containerd[1460]: time="2025-07-07T01:26:32.288894447Z" level=info msg="TearDown network for sandbox \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\" successfully" Jul 7 01:26:32.293100 containerd[1460]: time="2025-07-07T01:26:32.293060694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:26:32.293170 containerd[1460]: time="2025-07-07T01:26:32.293109946Z" level=info msg="RemovePodSandbox \"03d278563755540ce40225bceb5576da6e11b82c901e85a074e046baa5b9acd8\" returns successfully" Jul 7 01:26:32.758012 kubelet[1843]: E0707 01:26:32.757895 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:33.759123 kubelet[1843]: E0707 01:26:33.758993 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:34.759859 kubelet[1843]: E0707 01:26:34.759587 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:35.760618 kubelet[1843]: E0707 01:26:35.760530 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:36.761614 kubelet[1843]: E0707 01:26:36.761517 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:37.762243 kubelet[1843]: E0707 01:26:37.762149 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:38.762629 kubelet[1843]: E0707 01:26:38.762537 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:39.762903 kubelet[1843]: E0707 01:26:39.762806 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:40.763557 kubelet[1843]: E0707 01:26:40.763405 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:41.764653 kubelet[1843]: E0707 01:26:41.764551 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:42.765537 kubelet[1843]: E0707 
01:26:42.765403 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:43.766510 kubelet[1843]: E0707 01:26:43.766389 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:44.767565 kubelet[1843]: E0707 01:26:44.767397 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:45.768414 kubelet[1843]: E0707 01:26:45.768261 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:46.768721 kubelet[1843]: E0707 01:26:46.768491 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:47.769653 kubelet[1843]: E0707 01:26:47.769527 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:48.770209 kubelet[1843]: E0707 01:26:48.770087 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:49.771249 kubelet[1843]: E0707 01:26:49.771140 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:50.772110 kubelet[1843]: E0707 01:26:50.772005 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:51.688109 kubelet[1843]: E0707 01:26:51.687957 1843 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:51.773094 kubelet[1843]: E0707 01:26:51.773026 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:52.774138 kubelet[1843]: E0707 01:26:52.773906 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:53.774252 kubelet[1843]: E0707 01:26:53.774114 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:54.724725 systemd[1]: Created slice kubepods-besteffort-pod3a61ed0d_f9ff_4ac3_b249_40f306df26c3.slice - libcontainer container kubepods-besteffort-pod3a61ed0d_f9ff_4ac3_b249_40f306df26c3.slice. 
Jul 7 01:26:54.775432 kubelet[1843]: E0707 01:26:54.775206 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:54.818288 kubelet[1843]: I0707 01:26:54.818199 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fda7d9d4-06eb-4386-a8d3-c93332aac0af\" (UniqueName: \"kubernetes.io/nfs/3a61ed0d-f9ff-4ac3-b249-40f306df26c3-pvc-fda7d9d4-06eb-4386-a8d3-c93332aac0af\") pod \"test-pod-1\" (UID: \"3a61ed0d-f9ff-4ac3-b249-40f306df26c3\") " pod="default/test-pod-1" Jul 7 01:26:54.818419 kubelet[1843]: I0707 01:26:54.818307 1843 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bksrw\" (UniqueName: \"kubernetes.io/projected/3a61ed0d-f9ff-4ac3-b249-40f306df26c3-kube-api-access-bksrw\") pod \"test-pod-1\" (UID: \"3a61ed0d-f9ff-4ac3-b249-40f306df26c3\") " pod="default/test-pod-1" Jul 7 01:26:55.013883 kernel: FS-Cache: Loaded Jul 7 01:26:55.131915 kernel: RPC: Registered named UNIX socket transport module. Jul 7 01:26:55.132105 kernel: RPC: Registered udp transport module. Jul 7 01:26:55.132130 kernel: RPC: Registered tcp transport module. Jul 7 01:26:55.132161 kernel: RPC: Registered tcp-with-tls transport module. Jul 7 01:26:55.132884 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 7 01:26:55.481040 kernel: NFS: Registering the id_resolver key type Jul 7 01:26:55.482708 kernel: Key type id_resolver registered Jul 7 01:26:55.483380 kernel: Key type id_legacy registered Jul 7 01:26:55.551624 nfsidmap[3437]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Jul 7 01:26:55.560167 nfsidmap[3438]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Jul 7 01:26:55.634649 containerd[1460]: time="2025-07-07T01:26:55.634542032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3a61ed0d-f9ff-4ac3-b249-40f306df26c3,Namespace:default,Attempt:0,}" Jul 7 01:26:55.775965 kubelet[1843]: E0707 01:26:55.775658 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:55.867342 systemd-networkd[1374]: cali5ec59c6bf6e: Link UP Jul 7 01:26:55.867936 systemd-networkd[1374]: cali5ec59c6bf6e: Gained carrier Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.736 [INFO][3439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.191-k8s-test--pod--1-eth0 default 3a61ed0d-f9ff-4ac3-b249-40f306df26c3 1538 0 2025-07-07 01:26:25 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.191 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.191-k8s-test--pod--1-" Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.737 [INFO][3439] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.191-k8s-test--pod--1-eth0" Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.786 [INFO][3451] ipam/ipam_plugin.go 
225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" HandleID="k8s-pod-network.93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" Workload="172.24.4.191-k8s-test--pod--1-eth0" Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.786 [INFO][3451] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" HandleID="k8s-pod-network.93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" Workload="172.24.4.191-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f660), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.191", "pod":"test-pod-1", "timestamp":"2025-07-07 01:26:55.7861038 +0000 UTC"}, Hostname:"172.24.4.191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.786 [INFO][3451] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.786 [INFO][3451] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.786 [INFO][3451] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.191' Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.799 [INFO][3451] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" host="172.24.4.191" Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.812 [INFO][3451] ipam/ipam.go 394: Looking up existing affinities for host host="172.24.4.191" Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.824 [INFO][3451] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="172.24.4.191" Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.828 [INFO][3451] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="172.24.4.191" Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.833 [INFO][3451] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="172.24.4.191" Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.833 [INFO][3451] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" host="172.24.4.191" Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.836 [INFO][3451] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657 Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.848 [INFO][3451] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" host="172.24.4.191" Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.857 [INFO][3451] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.52.132/26] block=192.168.52.128/26 handle="k8s-pod-network.93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" host="172.24.4.191" Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.857 [INFO][3451] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.132/26] 
handle="k8s-pod-network.93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" host="172.24.4.191" Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.857 [INFO][3451] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.857 [INFO][3451] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.132/26] IPv6=[] ContainerID="93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" HandleID="k8s-pod-network.93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" Workload="172.24.4.191-k8s-test--pod--1-eth0" Jul 7 01:26:55.892843 containerd[1460]: 2025-07-07 01:26:55.860 [INFO][3439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.191-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.191-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3a61ed0d-f9ff-4ac3-b249-40f306df26c3", ResourceVersion:"1538", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.191", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:26:55.898436 containerd[1460]: 2025-07-07 01:26:55.860 [INFO][3439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.132/32] ContainerID="93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.191-k8s-test--pod--1-eth0" Jul 7 01:26:55.898436 containerd[1460]: 2025-07-07 01:26:55.860 [INFO][3439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.191-k8s-test--pod--1-eth0" Jul 7 01:26:55.898436 containerd[1460]: 2025-07-07 01:26:55.867 [INFO][3439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.191-k8s-test--pod--1-eth0" Jul 7 01:26:55.898436 containerd[1460]: 2025-07-07 01:26:55.870 [INFO][3439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.191-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"172.24.4.191-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3a61ed0d-f9ff-4ac3-b249-40f306df26c3", ResourceVersion:"1538", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.191", ContainerID:"93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"92:1a:d4:c8:a3:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:26:55.898436 containerd[1460]: 2025-07-07 01:26:55.888 [INFO][3439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.191-k8s-test--pod--1-eth0" Jul 7 01:26:55.946270 containerd[1460]: time="2025-07-07T01:26:55.946133782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:26:55.946706 containerd[1460]: time="2025-07-07T01:26:55.946514384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:26:55.946706 containerd[1460]: time="2025-07-07T01:26:55.946551133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:26:55.947779 containerd[1460]: time="2025-07-07T01:26:55.947471718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:26:55.973277 systemd[1]: run-containerd-runc-k8s.io-93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657-runc.ygR6gU.mount: Deactivated successfully. Jul 7 01:26:55.984949 systemd[1]: Started cri-containerd-93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657.scope - libcontainer container 93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657. 
Jul 7 01:26:56.029462 containerd[1460]: time="2025-07-07T01:26:56.029345839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3a61ed0d-f9ff-4ac3-b249-40f306df26c3,Namespace:default,Attempt:0,} returns sandbox id \"93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657\"" Jul 7 01:26:56.034316 containerd[1460]: time="2025-07-07T01:26:56.034273130Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 7 01:26:56.593576 containerd[1460]: time="2025-07-07T01:26:56.592718763Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:26:56.594729 containerd[1460]: time="2025-07-07T01:26:56.594592623Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jul 7 01:26:56.604662 containerd[1460]: time="2025-07-07T01:26:56.604589673Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:601c94998c5615a5f36a1babb9bcc2b1d9f112c02c19d68701b29f3fd6b2feb8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:9a18b8b1845bf88a3388cde4ec626461965a717ac641198120979e75438b9693\", size \"73313108\" in 570.254637ms" Jul 7 01:26:56.605694 containerd[1460]: time="2025-07-07T01:26:56.604865390Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:601c94998c5615a5f36a1babb9bcc2b1d9f112c02c19d68701b29f3fd6b2feb8\"" Jul 7 01:26:56.610560 containerd[1460]: time="2025-07-07T01:26:56.610462266Z" level=info msg="CreateContainer within sandbox \"93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 7 01:26:56.648686 containerd[1460]: time="2025-07-07T01:26:56.648437023Z" level=info msg="CreateContainer within sandbox \"93c990e01cdb4d9921c203992e8441ffbefc66b85bdecedfad6af06e3250c657\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"85d647acfbb882209c71de1aa23ecaf4909f29ab2754e358f13edea8f34211e4\"" Jul 7 01:26:56.666461 containerd[1460]: time="2025-07-07T01:26:56.666209290Z" level=info msg="StartContainer for \"85d647acfbb882209c71de1aa23ecaf4909f29ab2754e358f13edea8f34211e4\"" Jul 7 01:26:56.739120 systemd[1]: Started cri-containerd-85d647acfbb882209c71de1aa23ecaf4909f29ab2754e358f13edea8f34211e4.scope - libcontainer container 85d647acfbb882209c71de1aa23ecaf4909f29ab2754e358f13edea8f34211e4. 
Jul 7 01:26:56.776764 kubelet[1843]: E0707 01:26:56.776666 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:56.791485 containerd[1460]: time="2025-07-07T01:26:56.791408884Z" level=info msg="StartContainer for \"85d647acfbb882209c71de1aa23ecaf4909f29ab2754e358f13edea8f34211e4\" returns successfully" Jul 7 01:26:57.777657 kubelet[1843]: E0707 01:26:57.777538 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:57.859163 systemd-networkd[1374]: cali5ec59c6bf6e: Gained IPv6LL Jul 7 01:26:58.778199 kubelet[1843]: E0707 01:26:58.778112 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:26:59.779421 kubelet[1843]: E0707 01:26:59.779107 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:27:00.780108 kubelet[1843]: E0707 01:27:00.779995 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:27:01.781170 kubelet[1843]: E0707 01:27:01.781080 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:27:02.782141 kubelet[1843]: E0707 01:27:02.782042 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:27:03.782469 kubelet[1843]: E0707 01:27:03.782356 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:27:04.782929 kubelet[1843]: E0707 01:27:04.782836 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 01:27:05.783700 kubelet[1843]: E0707 01:27:05.783560 1843 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"