Feb 13 20:51:32.036327 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025 Feb 13 20:51:32.036364 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 20:51:32.036377 kernel: BIOS-provided physical RAM map: Feb 13 20:51:32.036387 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 20:51:32.036395 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 20:51:32.036407 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 20:51:32.036418 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Feb 13 20:51:32.036427 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Feb 13 20:51:32.036437 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 20:51:32.036446 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 20:51:32.036455 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Feb 13 20:51:32.036464 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 20:51:32.036474 kernel: NX (Execute Disable) protection: active Feb 13 20:51:32.036483 kernel: APIC: Static calls initialized Feb 13 20:51:32.036497 kernel: SMBIOS 3.0.0 present. Feb 13 20:51:32.036507 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Feb 13 20:51:32.036517 kernel: Hypervisor detected: KVM Feb 13 20:51:32.036527 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 20:51:32.036536 kernel: kvm-clock: using sched offset of 4022754871 cycles Feb 13 20:51:32.036548 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 20:51:32.036574 kernel: tsc: Detected 1996.249 MHz processor Feb 13 20:51:32.036585 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 20:51:32.036596 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 20:51:32.036633 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Feb 13 20:51:32.036644 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 20:51:32.036654 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 20:51:32.036664 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Feb 13 20:51:32.036675 kernel: ACPI: Early table checksum verification disabled Feb 13 20:51:32.036689 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Feb 13 20:51:32.036700 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:51:32.036712 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:51:32.036726 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:51:32.036740 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Feb 13 20:51:32.036753 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:51:32.036767 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:51:32.036778 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Feb 13 20:51:32.036788 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Feb 13 20:51:32.036802 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Feb 13 20:51:32.036812 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Feb 13 20:51:32.036823 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Feb 13 20:51:32.036844 kernel: No NUMA configuration found Feb 13 20:51:32.036859 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Feb 13 20:51:32.036873 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff] Feb 13 20:51:32.036885 kernel: Zone ranges: Feb 13 20:51:32.036902 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 20:51:32.036912 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 20:51:32.036924 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Feb 13 20:51:32.036938 kernel: Movable zone start for each node Feb 13 20:51:32.036953 kernel: Early memory node ranges Feb 13 20:51:32.036968 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 20:51:32.036982 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Feb 13 20:51:32.036993 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Feb 13 20:51:32.037009 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Feb 13 20:51:32.037020 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 20:51:32.037034 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 20:51:32.037049 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Feb 13 20:51:32.037063 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 20:51:32.037078 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 20:51:32.037092 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 20:51:32.037106 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 20:51:32.037121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 20:51:32.037142 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 20:51:32.037156 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 20:51:32.037171 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 20:51:32.037185 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 20:51:32.037199 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 20:51:32.037214 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 20:51:32.037229 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Feb 13 20:51:32.037244 kernel: Booting paravirtualized kernel on KVM Feb 13 20:51:32.037259 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 20:51:32.037280 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 20:51:32.037295 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 20:51:32.037309 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 20:51:32.037323 kernel: pcpu-alloc: [0] 0 1 Feb 13 20:51:32.037336 kernel: kvm-guest: PV spinlocks disabled, no host support Feb 13 20:51:32.037352 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 20:51:32.037364 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 20:51:32.037375 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 20:51:32.037389 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:51:32.037400 kernel: Fallback order for Node 0: 0 Feb 13 20:51:32.037411 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Feb 13 20:51:32.037421 kernel: Policy zone: Normal Feb 13 20:51:32.037432 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 20:51:32.037442 kernel: software IO TLB: area num 2. Feb 13 20:51:32.037453 kernel: Memory: 3964168K/4193772K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 229344K reserved, 0K cma-reserved) Feb 13 20:51:32.037464 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 20:51:32.037477 kernel: ftrace: allocating 37893 entries in 149 pages Feb 13 20:51:32.037488 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 20:51:32.037498 kernel: Dynamic Preempt: voluntary Feb 13 20:51:32.037508 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 20:51:32.037520 kernel: rcu: RCU event tracing is enabled. Feb 13 20:51:32.037531 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 20:51:32.037542 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 20:51:32.037553 kernel: Rude variant of Tasks RCU enabled. Feb 13 20:51:32.037564 kernel: Tracing variant of Tasks RCU enabled. Feb 13 20:51:32.037574 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 20:51:32.037589 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 20:51:32.037617 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 13 20:51:32.037628 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 20:51:32.038631 kernel: Console: colour VGA+ 80x25 Feb 13 20:51:32.038645 kernel: printk: console [tty0] enabled Feb 13 20:51:32.038655 kernel: printk: console [ttyS0] enabled Feb 13 20:51:32.038665 kernel: ACPI: Core revision 20230628 Feb 13 20:51:32.038674 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 20:51:32.038684 kernel: x2apic enabled Feb 13 20:51:32.038699 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 20:51:32.038709 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 20:51:32.038718 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 13 20:51:32.038728 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Feb 13 20:51:32.038738 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 13 20:51:32.038748 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 13 20:51:32.038757 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 20:51:32.038767 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 20:51:32.038777 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 20:51:32.038788 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 20:51:32.038798 kernel: Speculative Store Bypass: Vulnerable Feb 13 20:51:32.038808 kernel: x86/fpu: x87 FPU will use FXSAVE Feb 13 20:51:32.038817 kernel: Freeing SMP alternatives memory: 32K Feb 13 20:51:32.038834 kernel: pid_max: default: 32768 minimum: 301 Feb 13 20:51:32.038845 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:51:32.038855 kernel: landlock: Up and running. Feb 13 20:51:32.038865 kernel: SELinux: Initializing. Feb 13 20:51:32.038875 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 20:51:32.038885 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 20:51:32.038895 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Feb 13 20:51:32.038905 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:51:32.038918 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:51:32.038928 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:51:32.038938 kernel: Performance Events: AMD PMU driver. Feb 13 20:51:32.038948 kernel: ... version: 0 Feb 13 20:51:32.038961 kernel: ... bit width: 48 Feb 13 20:51:32.038970 kernel: ... generic registers: 4 Feb 13 20:51:32.038980 kernel: ... value mask: 0000ffffffffffff Feb 13 20:51:32.038990 kernel: ... max period: 00007fffffffffff Feb 13 20:51:32.039000 kernel: ... fixed-purpose events: 0 Feb 13 20:51:32.039010 kernel: ... event mask: 000000000000000f Feb 13 20:51:32.039020 kernel: signal: max sigframe size: 1440 Feb 13 20:51:32.039030 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:51:32.039040 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:51:32.039050 kernel: smp: Bringing up secondary CPUs ... Feb 13 20:51:32.039063 kernel: smpboot: x86: Booting SMP configuration: Feb 13 20:51:32.039073 kernel: .... 
node #0, CPUs: #1 Feb 13 20:51:32.039083 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 20:51:32.039093 kernel: smpboot: Max logical packages: 2 Feb 13 20:51:32.039103 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Feb 13 20:51:32.039113 kernel: devtmpfs: initialized Feb 13 20:51:32.039123 kernel: x86/mm: Memory block size: 128MB Feb 13 20:51:32.039133 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:51:32.039144 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 20:51:32.039156 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:51:32.039166 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:51:32.039176 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:51:32.039187 kernel: audit: type=2000 audit(1739479890.563:1): state=initialized audit_enabled=0 res=1 Feb 13 20:51:32.039197 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:51:32.039207 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 20:51:32.039217 kernel: cpuidle: using governor menu Feb 13 20:51:32.039227 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:51:32.039237 kernel: dca service started, version 1.12.1 Feb 13 20:51:32.039249 kernel: PCI: Using configuration type 1 for base access Feb 13 20:51:32.039260 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 13 20:51:32.039270 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:51:32.039280 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:51:32.039290 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:51:32.039300 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:51:32.039310 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:51:32.039320 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:51:32.039330 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 20:51:32.039342 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 20:51:32.039352 kernel: ACPI: Interpreter enabled Feb 13 20:51:32.039362 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 20:51:32.039372 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 20:51:32.039382 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 20:51:32.039392 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 20:51:32.039402 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 13 20:51:32.039412 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 20:51:32.039590 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 20:51:32.043170 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 20:51:32.043271 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 20:51:32.043287 kernel: acpiphp: Slot [3] registered Feb 13 20:51:32.043298 kernel: acpiphp: Slot [4] registered Feb 13 20:51:32.043308 kernel: acpiphp: Slot [5] registered Feb 13 20:51:32.043318 kernel: acpiphp: Slot [6] registered Feb 13 20:51:32.043328 kernel: acpiphp: Slot [7] registered Feb 13 20:51:32.043343 kernel: acpiphp: Slot [8] registered Feb 13 20:51:32.043353 kernel: acpiphp: Slot [9] registered Feb 13 20:51:32.043363 kernel: acpiphp: Slot [10] registered Feb 13 20:51:32.043373 
kernel: acpiphp: Slot [11] registered Feb 13 20:51:32.043383 kernel: acpiphp: Slot [12] registered Feb 13 20:51:32.043393 kernel: acpiphp: Slot [13] registered Feb 13 20:51:32.043402 kernel: acpiphp: Slot [14] registered Feb 13 20:51:32.043412 kernel: acpiphp: Slot [15] registered Feb 13 20:51:32.043422 kernel: acpiphp: Slot [16] registered Feb 13 20:51:32.043432 kernel: acpiphp: Slot [17] registered Feb 13 20:51:32.043446 kernel: acpiphp: Slot [18] registered Feb 13 20:51:32.043456 kernel: acpiphp: Slot [19] registered Feb 13 20:51:32.043466 kernel: acpiphp: Slot [20] registered Feb 13 20:51:32.043476 kernel: acpiphp: Slot [21] registered Feb 13 20:51:32.043486 kernel: acpiphp: Slot [22] registered Feb 13 20:51:32.043496 kernel: acpiphp: Slot [23] registered Feb 13 20:51:32.043506 kernel: acpiphp: Slot [24] registered Feb 13 20:51:32.043516 kernel: acpiphp: Slot [25] registered Feb 13 20:51:32.043526 kernel: acpiphp: Slot [26] registered Feb 13 20:51:32.043538 kernel: acpiphp: Slot [27] registered Feb 13 20:51:32.043548 kernel: acpiphp: Slot [28] registered Feb 13 20:51:32.043558 kernel: acpiphp: Slot [29] registered Feb 13 20:51:32.043567 kernel: acpiphp: Slot [30] registered Feb 13 20:51:32.043577 kernel: acpiphp: Slot [31] registered Feb 13 20:51:32.043587 kernel: PCI host bridge to bus 0000:00 Feb 13 20:51:32.043728 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 20:51:32.043821 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 20:51:32.043918 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 20:51:32.044005 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 20:51:32.044092 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Feb 13 20:51:32.044178 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 20:51:32.044299 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 13 20:51:32.044409 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 13 20:51:32.044517 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 13 20:51:32.044669 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Feb 13 20:51:32.044772 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 13 20:51:32.044870 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 13 20:51:32.044968 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 13 20:51:32.045066 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 13 20:51:32.045173 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 13 20:51:32.045279 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 13 20:51:32.045377 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 13 20:51:32.045483 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Feb 13 20:51:32.045583 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Feb 13 20:51:32.045725 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Feb 13 20:51:32.045825 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Feb 13 20:51:32.045926 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Feb 13 20:51:32.046033 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 20:51:32.046151 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 13 20:51:32.046249 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Feb 13 20:51:32.046347 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Feb 13 20:51:32.046444 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Feb 13 20:51:32.046542 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Feb 13 20:51:32.052250 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 13 20:51:32.052365 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 13 20:51:32.052467 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Feb 13 20:51:32.052577 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Feb 13 20:51:32.052718 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Feb 13 20:51:32.052818 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Feb 13 20:51:32.052914 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Feb 13 20:51:32.053020 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 20:51:32.053140 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Feb 13 20:51:32.053240 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Feb 13 20:51:32.053339 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Feb 13 20:51:32.053354 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 20:51:32.053365 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 20:51:32.053376 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 20:51:32.053386 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 20:51:32.053396 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 13 20:51:32.053411 kernel: iommu: Default domain type: Translated Feb 13 20:51:32.053421 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 20:51:32.053432 kernel: PCI: Using ACPI for IRQ routing Feb 13 20:51:32.053442 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 20:51:32.053452 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 20:51:32.053462 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Feb 13 20:51:32.053557 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 13 20:51:32.053695 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 13 20:51:32.053802 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 20:51:32.053817 kernel: vgaarb: loaded Feb 13 20:51:32.053828 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 20:51:32.053838 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:51:32.053849 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:51:32.053859 kernel: pnp: PnP ACPI init Feb 13 20:51:32.053959 kernel: pnp 00:03: [dma 2] Feb 13 20:51:32.053975 kernel: pnp: PnP ACPI: found 5 devices Feb 13 20:51:32.053986 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 20:51:32.054000 kernel: NET: Registered PF_INET protocol family Feb 13 20:51:32.054011 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 20:51:32.054021 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 20:51:32.054031 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:51:32.054041 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 20:51:32.054051 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Feb 13 20:51:32.054061 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 20:51:32.054071 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 20:51:32.054082 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 20:51:32.054094 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:51:32.054105 kernel: NET: Registered PF_XDP protocol family Feb 13 20:51:32.054193 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 20:51:32.054294 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 20:51:32.054384 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 20:51:32.054472 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Feb 13 20:51:32.054558 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Feb 13 20:51:32.054718 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 13 20:51:32.054827 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 20:51:32.054843 kernel: PCI: CLS 0 bytes, default 64 Feb 13 20:51:32.054853 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 20:51:32.054864 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Feb 13 20:51:32.054874 kernel: Initialise system trusted keyrings Feb 13 20:51:32.054884 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 20:51:32.054894 kernel: Key type asymmetric registered Feb 13 20:51:32.054904 kernel: Asymmetric key parser 'x509' registered Feb 13 20:51:32.054919 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 20:51:32.054929 kernel: io scheduler mq-deadline registered Feb 13 20:51:32.054939 kernel: io scheduler kyber registered Feb 13 20:51:32.054949 kernel: io scheduler bfq registered Feb 13 20:51:32.054960 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 20:51:32.054971 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Feb 13 20:51:32.054981 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 13 20:51:32.054991 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 13 20:51:32.055001 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 13 20:51:32.055011 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:51:32.055024 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:51:32.055035 kernel: random: crng init done Feb 13 20:51:32.055045 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 20:51:32.055055 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 20:51:32.055065 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 20:51:32.055162 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 13 20:51:32.055179 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 20:51:32.055280 kernel: rtc_cmos 00:04: registered as rtc0 Feb 13 20:51:32.055376 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T20:51:31 UTC (1739479891) Feb 13 20:51:32.055464 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Feb 13 20:51:32.055479 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Feb 13 20:51:32.055489 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:51:32.055499 kernel: Segment Routing with IPv6 Feb 13 20:51:32.055509 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:51:32.055519 kernel: NET: Registered PF_PACKET 
protocol family Feb 13 20:51:32.055529 kernel: Key type dns_resolver registered Feb 13 20:51:32.055543 kernel: IPI shorthand broadcast: enabled Feb 13 20:51:32.055554 kernel: sched_clock: Marking stable (1100008500, 190314688)->(1339779476, -49456288) Feb 13 20:51:32.055564 kernel: registered taskstats version 1 Feb 13 20:51:32.055574 kernel: Loading compiled-in X.509 certificates Feb 13 20:51:32.055584 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d' Feb 13 20:51:32.055594 kernel: Key type .fscrypt registered Feb 13 20:51:32.055640 kernel: Key type fscrypt-provisioning registered Feb 13 20:51:32.055655 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 20:51:32.055670 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:51:32.055685 kernel: ima: No architecture policies found Feb 13 20:51:32.055694 kernel: clk: Disabling unused clocks Feb 13 20:51:32.055705 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 20:51:32.055715 kernel: Write protecting the kernel read-only data: 38912k Feb 13 20:51:32.055725 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 20:51:32.055735 kernel: Run /init as init process Feb 13 20:51:32.055745 kernel: with arguments: Feb 13 20:51:32.055755 kernel: /init Feb 13 20:51:32.055765 kernel: with environment: Feb 13 20:51:32.055778 kernel: HOME=/ Feb 13 20:51:32.055788 kernel: TERM=linux Feb 13 20:51:32.055797 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:51:32.055811 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:51:32.055825 systemd[1]: Detected virtualization kvm. Feb 13 20:51:32.055836 systemd[1]: Detected architecture x86-64. Feb 13 20:51:32.055847 systemd[1]: Running in initrd. Feb 13 20:51:32.055861 systemd[1]: No hostname configured, using default hostname. Feb 13 20:51:32.055872 systemd[1]: Hostname set to . Feb 13 20:51:32.055884 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:51:32.055894 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:51:32.055909 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:51:32.055925 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:51:32.055942 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:51:32.055973 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:51:32.055988 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:51:32.056001 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:51:32.056016 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:51:32.056029 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:51:32.056042 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Feb 13 20:51:32.056057 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:51:32.056070 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:51:32.056082 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:51:32.056095 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:51:32.056107 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:51:32.056120 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:51:32.056132 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:51:32.056145 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:51:32.056161 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:51:32.056174 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:51:32.056186 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:51:32.056199 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:51:32.056211 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:51:32.056224 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:51:32.056236 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:51:32.056252 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:51:32.056269 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:51:32.056288 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:51:32.056300 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:51:32.056313 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:51:32.056325 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:51:32.056338 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:51:32.056374 systemd-journald[185]: Collecting audit messages is disabled. Feb 13 20:51:32.056408 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:51:32.056426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:51:32.056442 systemd-journald[185]: Journal started Feb 13 20:51:32.056472 systemd-journald[185]: Runtime Journal (/run/log/journal/1bcb6defdff04d86a50eae32ab120dca) is 8.0M, max 78.3M, 70.3M free. Feb 13 20:51:32.006353 systemd-modules-load[186]: Inserted module 'overlay' Feb 13 20:51:32.100790 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:51:32.100822 kernel: Bridge firewalling registered Feb 13 20:51:32.100836 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:51:32.061210 systemd-modules-load[186]: Inserted module 'br_netfilter' Feb 13 20:51:32.101570 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:51:32.102523 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:51:32.103784 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:51:32.112900 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:51:32.114976 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Feb 13 20:51:32.118976 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:51:32.121881 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:51:32.136634 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:51:32.141979 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:51:32.146571 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:51:32.147271 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:51:32.152578 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:51:32.161622 dracut-cmdline[217]: dracut-dracut-053 Feb 13 20:51:32.164505 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 20:51:32.163948 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:51:32.204350 systemd-resolved[227]: Positive Trust Anchors: Feb 13 20:51:32.204367 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:51:32.204412 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:51:32.208958 systemd-resolved[227]: Defaulting to hostname 'linux'. Feb 13 20:51:32.212648 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:51:32.213538 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:51:32.233645 kernel: SCSI subsystem initialized Feb 13 20:51:32.243703 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:51:32.256688 kernel: iscsi: registered transport (tcp) Feb 13 20:51:32.299407 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:51:32.299542 kernel: QLogic iSCSI HBA Driver Feb 13 20:51:32.395026 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:51:32.403894 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:51:32.477952 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 20:51:32.478058 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:51:32.481404 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:51:32.545720 kernel: raid6: sse2x4 gen() 5156 MB/s Feb 13 20:51:32.564789 kernel: raid6: sse2x2 gen() 5910 MB/s Feb 13 20:51:32.583098 kernel: raid6: sse2x1 gen() 8217 MB/s Feb 13 20:51:32.583188 kernel: raid6: using algorithm sse2x1 gen() 8217 MB/s Feb 13 20:51:32.602246 kernel: raid6: .... xor() 7187 MB/s, rmw enabled Feb 13 20:51:32.602339 kernel: raid6: using ssse3x2 recovery algorithm Feb 13 20:51:32.626209 kernel: xor: measuring software checksum speed Feb 13 20:51:32.626281 kernel: prefetch64-sse : 18091 MB/sec Feb 13 20:51:32.626716 kernel: generic_sse : 16867 MB/sec Feb 13 20:51:32.627861 kernel: xor: using function: prefetch64-sse (18091 MB/sec) Feb 13 20:51:32.806668 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:51:32.824378 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:51:32.836921 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:51:32.849330 systemd-udevd[404]: Using default interface naming scheme 'v255'. Feb 13 20:51:32.853819 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:51:32.862890 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:51:32.890070 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Feb 13 20:51:32.934267 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:51:32.942854 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:51:33.017331 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:51:33.027892 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:51:33.062065 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:51:33.067876 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:51:33.072243 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:51:33.074256 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:51:33.086242 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:51:33.118051 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Feb 13 20:51:33.159992 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Feb 13 20:51:33.160131 kernel: libata version 3.00 loaded. Feb 13 20:51:33.160148 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 13 20:51:33.160284 kernel: scsi host0: ata_piix Feb 13 20:51:33.160405 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:51:33.160424 kernel: scsi host1: ata_piix Feb 13 20:51:33.160537 kernel: GPT:17805311 != 20971519 Feb 13 20:51:33.160566 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 13 20:51:33.160580 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:51:33.160596 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 13 20:51:33.160631 kernel: GPT:17805311 != 20971519 Feb 13 20:51:33.160643 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:51:33.160655 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:51:33.118967 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Feb 13 20:51:33.154039 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:51:33.154176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:51:33.159638 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:51:33.161277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:51:33.161488 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:51:33.162592 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:51:33.169985 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:51:33.224424 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:51:33.231899 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:51:33.252994 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:51:33.356724 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (453) Feb 13 20:51:33.368674 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (454) Feb 13 20:51:33.385054 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 20:51:33.390207 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 20:51:33.390901 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 20:51:33.403481 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:51:33.411564 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 20:51:33.422771 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:51:33.449763 disk-uuid[514]: Primary Header is updated. Feb 13 20:51:33.449763 disk-uuid[514]: Secondary Entries is updated. Feb 13 20:51:33.449763 disk-uuid[514]: Secondary Header is updated. Feb 13 20:51:33.459324 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:51:34.474675 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:51:34.475716 disk-uuid[515]: The operation has completed successfully. Feb 13 20:51:34.552806 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:51:34.553073 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:51:34.585738 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:51:34.590017 sh[526]: Success Feb 13 20:51:34.609928 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 13 20:51:34.709767 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:51:34.719826 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:51:34.726081 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 20:51:34.774285 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9 Feb 13 20:51:34.774378 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:51:34.777660 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:51:34.783973 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:51:34.787736 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:51:34.809240 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:51:34.811975 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:51:34.818958 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:51:34.823876 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:51:34.851192 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 20:51:34.851276 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:51:34.855671 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:51:34.870777 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:51:34.894337 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:51:34.901640 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 20:51:34.915976 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:51:34.924179 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:51:35.043064 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:51:35.051881 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:51:35.087086 ignition[622]: Ignition 2.20.0 Feb 13 20:51:35.087902 ignition[622]: Stage: fetch-offline Feb 13 20:51:35.088423 ignition[622]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:51:35.088436 ignition[622]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 20:51:35.088530 ignition[622]: parsed url from cmdline: "" Feb 13 20:51:35.088534 ignition[622]: no config URL provided Feb 13 20:51:35.088540 ignition[622]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:51:35.088565 ignition[622]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:51:35.092465 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:51:35.088572 ignition[622]: failed to fetch config: resource requires networking Feb 13 20:51:35.091046 ignition[622]: Ignition finished successfully Feb 13 20:51:35.097107 systemd-networkd[710]: lo: Link UP Feb 13 20:51:35.097111 systemd-networkd[710]: lo: Gained carrier Feb 13 20:51:35.098386 systemd-networkd[710]: Enumeration completed Feb 13 20:51:35.098689 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:51:35.098813 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:51:35.098817 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 20:51:35.099888 systemd-networkd[710]: eth0: Link UP Feb 13 20:51:35.099893 systemd-networkd[710]: eth0: Gained carrier Feb 13 20:51:35.099901 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:51:35.100185 systemd[1]: Reached target network.target - Network. Feb 13 20:51:35.105926 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 20:51:35.110655 systemd-networkd[710]: eth0: DHCPv4 address 172.24.4.171/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 13 20:51:35.123475 ignition[717]: Ignition 2.20.0 Feb 13 20:51:35.123488 ignition[717]: Stage: fetch Feb 13 20:51:35.123696 ignition[717]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:51:35.123707 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 20:51:35.123799 ignition[717]: parsed url from cmdline: "" Feb 13 20:51:35.123803 ignition[717]: no config URL provided Feb 13 20:51:35.123809 ignition[717]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:51:35.123817 ignition[717]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:51:35.123895 ignition[717]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 13 20:51:35.124974 ignition[717]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 13 20:51:35.124996 ignition[717]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Feb 13 20:51:35.330172 ignition[717]: GET result: OK Feb 13 20:51:35.330286 ignition[717]: parsing config with SHA512: 0fa2f5076d28a4c56100123e2549faa025ab1b56621d55b15ff2f3dfee870e1410359a9de644f3830dfeb5752c0edf3a48f75dc44d2c5e6c176a58bbedf2af69 Feb 13 20:51:35.337717 unknown[717]: fetched base config from "system" Feb 13 20:51:35.337744 unknown[717]: fetched base config from "system" Feb 13 20:51:35.338387 ignition[717]: fetch: fetch complete Feb 13 20:51:35.337758 unknown[717]: fetched user config from "openstack" Feb 13 20:51:35.338400 ignition[717]: fetch: fetch passed Feb 13 20:51:35.342005 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:51:35.338486 ignition[717]: Ignition finished successfully Feb 13 20:51:35.342195 systemd-resolved[227]: Detected conflict on linux IN A 172.24.4.171 Feb 13 20:51:35.342214 systemd-resolved[227]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Feb 13 20:51:35.352040 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:51:35.388900 ignition[725]: Ignition 2.20.0 Feb 13 20:51:35.388934 ignition[725]: Stage: kargs Feb 13 20:51:35.389333 ignition[725]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:51:35.389358 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 20:51:35.393574 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:51:35.391153 ignition[725]: kargs: kargs passed Feb 13 20:51:35.391252 ignition[725]: Ignition finished successfully Feb 13 20:51:35.409517 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:51:35.436707 ignition[731]: Ignition 2.20.0 Feb 13 20:51:35.436727 ignition[731]: Stage: disks Feb 13 20:51:35.437269 ignition[731]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:51:35.437296 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 20:51:35.441245 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Feb 13 20:51:35.439107 ignition[731]: disks: disks passed Feb 13 20:51:35.445307 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:51:35.439204 ignition[731]: Ignition finished successfully Feb 13 20:51:35.447274 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:51:35.449831 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:51:35.452827 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:51:35.455265 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:51:35.470961 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:51:35.503498 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 20:51:35.516338 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:51:35.525890 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:51:35.683675 kernel: EXT4-fs (vda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none. Feb 13 20:51:35.684864 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:51:35.687153 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:51:35.694704 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:51:35.700796 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:51:35.702935 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 20:51:35.709810 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Feb 13 20:51:35.711398 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:51:35.711434 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:51:35.715482 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:51:35.724646 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (747) Feb 13 20:51:35.740999 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:51:35.747857 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 20:51:35.747884 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:51:35.747897 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:51:35.754634 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:51:35.762206 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:51:35.834535 initrd-setup-root[774]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:51:35.846968 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:51:35.853738 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:51:35.865069 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:51:35.991906 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:51:36.001750 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:51:36.005827 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:51:36.013195 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Feb 13 20:51:36.014965 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 20:51:36.051228 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:51:36.057484 ignition[863]: INFO : Ignition 2.20.0 Feb 13 20:51:36.059080 ignition[863]: INFO : Stage: mount Feb 13 20:51:36.060465 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:51:36.060465 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 20:51:36.064489 ignition[863]: INFO : mount: mount passed Feb 13 20:51:36.065085 ignition[863]: INFO : Ignition finished successfully Feb 13 20:51:36.066394 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:51:36.926828 systemd-networkd[710]: eth0: Gained IPv6LL Feb 13 20:51:42.947500 coreos-metadata[749]: Feb 13 20:51:42.947 WARN failed to locate config-drive, using the metadata service API instead Feb 13 20:51:42.991027 coreos-metadata[749]: Feb 13 20:51:42.990 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 20:51:43.010309 coreos-metadata[749]: Feb 13 20:51:43.010 INFO Fetch successful Feb 13 20:51:43.011908 coreos-metadata[749]: Feb 13 20:51:43.011 INFO wrote hostname ci-4186-1-1-d-ccac17ed2f.novalocal to /sysroot/etc/hostname Feb 13 20:51:43.015453 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 13 20:51:43.015727 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Feb 13 20:51:43.026860 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:51:43.061058 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:51:43.083723 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (881) Feb 13 20:51:43.091385 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 20:51:43.091481 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:51:43.095703 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:51:43.106748 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:51:43.112903 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:51:43.160452 ignition[898]: INFO : Ignition 2.20.0 Feb 13 20:51:43.160452 ignition[898]: INFO : Stage: files Feb 13 20:51:43.164181 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:51:43.164181 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 20:51:43.164181 ignition[898]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:51:43.169841 ignition[898]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:51:43.169841 ignition[898]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:51:43.173845 ignition[898]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:51:43.173845 ignition[898]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:51:43.173845 ignition[898]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:51:43.172767 unknown[898]: wrote ssh authorized keys file for user: core Feb 13 20:51:43.179586 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:51:43.179586 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:51:43.179586 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:51:43.179586 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:51:43.179586 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 20:51:43.179586 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 20:51:43.179586 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 20:51:43.179586 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Feb 13 20:51:43.617190 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 20:51:45.236444 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 20:51:45.240486 ignition[898]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:51:45.240486 ignition[898]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:51:45.240486 ignition[898]: INFO : files: files passed Feb 13 20:51:45.240486 ignition[898]: INFO : Ignition finished successfully Feb 13 20:51:45.238906 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:51:45.248828 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Feb 13 20:51:45.254750 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:51:45.257396 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:51:45.257558 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:51:45.268830 initrd-setup-root-after-ignition[928]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:51:45.268830 initrd-setup-root-after-ignition[928]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:51:45.274230 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:51:45.274596 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:51:45.278961 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:51:45.286033 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:51:45.320476 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:51:45.320851 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:51:45.323025 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:51:45.324569 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:51:45.326557 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:51:45.341922 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:51:45.359553 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:51:45.375903 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:51:45.397476 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:51:45.399268 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:51:45.402482 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:51:45.405385 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:51:45.405715 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:51:45.409031 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:51:45.410891 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:51:45.413871 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:51:45.416271 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:51:45.428364 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:51:45.431314 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:51:45.434229 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:51:45.437335 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:51:45.440152 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:51:45.443166 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:51:45.445942 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:51:45.446232 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:51:45.449813 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 20:51:45.452881 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:51:45.455717 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:51:45.455959 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:51:45.458805 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:51:45.459075 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:51:45.462848 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:51:45.463151 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:51:45.465083 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:51:45.465343 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:51:45.476166 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:51:45.480502 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:51:45.481041 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:51:45.488508 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:51:45.491959 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:51:45.494132 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:51:45.504186 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:51:45.504564 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:51:45.516095 ignition[952]: INFO : Ignition 2.20.0 Feb 13 20:51:45.516095 ignition[952]: INFO : Stage: umount Feb 13 20:51:45.516095 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:51:45.516095 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 20:51:45.514987 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:51:45.521085 ignition[952]: INFO : umount: umount passed Feb 13 20:51:45.521085 ignition[952]: INFO : Ignition finished successfully Feb 13 20:51:45.515076 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:51:45.517966 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:51:45.518322 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:51:45.520254 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:51:45.520325 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:51:45.522574 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:51:45.522631 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:51:45.523806 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:51:45.523844 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:51:45.524463 systemd[1]: Stopped target network.target - Network. Feb 13 20:51:45.524948 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:51:45.525008 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:51:45.525553 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:51:45.527724 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:51:45.532003 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 20:51:45.533496 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:51:45.535688 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:51:45.536371 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:51:45.536406 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:51:45.536908 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:51:45.536941 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:51:45.537552 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:51:45.537595 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:51:45.541738 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:51:45.541784 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:51:45.542996 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:51:45.545041 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:51:45.546868 systemd-networkd[710]: eth0: DHCPv6 lease lost Feb 13 20:51:45.547395 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:51:45.548059 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:51:45.548166 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:51:45.551588 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:51:45.551667 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:51:45.559699 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:51:45.560193 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:51:45.560240 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:51:45.560918 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:51:45.563888 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:51:45.563983 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:51:45.574087 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:51:45.574235 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:51:45.576966 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:51:45.577832 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:51:45.579170 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:51:45.579243 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:51:45.580004 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:51:45.580041 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:51:45.581290 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:51:45.581341 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:51:45.583460 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:51:45.583533 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:51:45.585079 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:51:45.585154 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:51:45.593779 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Feb 13 20:51:45.594308 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:51:45.594366 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:51:45.594895 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:51:45.594937 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:51:45.602657 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:51:45.602709 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:51:45.603267 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:51:45.603308 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:51:45.604932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:51:45.605030 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:51:45.607192 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:51:45.607337 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:51:45.824371 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:51:45.824721 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:51:45.828169 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:51:45.830141 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:51:45.830261 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:51:45.839905 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:51:45.872751 systemd[1]: Switching root. Feb 13 20:51:45.935916 systemd-journald[185]: Journal stopped Feb 13 20:51:47.547977 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Feb 13 20:51:47.548058 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:51:47.548082 kernel: SELinux: policy capability open_perms=1 Feb 13 20:51:47.548095 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:51:47.548108 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:51:47.548124 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:51:47.548138 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:51:47.548151 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:51:47.548163 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:51:47.548177 kernel: audit: type=1403 audit(1739479906.388:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:51:47.548194 systemd[1]: Successfully loaded SELinux policy in 72.230ms. Feb 13 20:51:47.548218 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.544ms. Feb 13 20:51:47.548233 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:51:47.548247 systemd[1]: Detected virtualization kvm. Feb 13 20:51:47.548261 systemd[1]: Detected architecture x86-64. Feb 13 20:51:47.548274 systemd[1]: Detected first boot. Feb 13 20:51:47.548288 systemd[1]: Hostname set to . Feb 13 20:51:47.548304 systemd[1]: Initializing machine ID from VM UUID. 
Feb 13 20:51:47.548317 zram_generator::config[997]: No configuration found. Feb 13 20:51:47.548332 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:51:47.548345 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:51:47.548360 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:51:47.548373 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:51:47.548390 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:51:47.548404 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:51:47.548421 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:51:47.548434 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:51:47.548448 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:51:47.548462 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:51:47.548475 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:51:47.548488 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:51:47.548504 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:51:47.548532 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:51:47.548548 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:51:47.548561 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:51:47.548575 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:51:47.548588 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:51:47.550864 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 20:51:47.550886 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:51:47.550899 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:51:47.550917 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:51:47.550930 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:51:47.550943 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:51:47.550955 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:51:47.550968 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:51:47.550980 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:51:47.550993 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:51:47.551008 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:51:47.551020 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:51:47.551033 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:51:47.551045 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:51:47.551058 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:51:47.551070 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Feb 13 20:51:47.551083 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:51:47.551095 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:51:47.551107 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:51:47.551122 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:51:47.551135 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:51:47.551147 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:51:47.551162 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:51:47.551176 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:51:47.551189 systemd[1]: Reached target machines.target - Containers. Feb 13 20:51:47.551201 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:51:47.551214 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:51:47.551229 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:51:47.551241 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:51:47.551253 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:51:47.551266 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:51:47.551278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:51:47.551290 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:51:47.551303 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:51:47.551316 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:51:47.551331 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:51:47.551344 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:51:47.551356 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:51:47.551369 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:51:47.551381 kernel: loop: module loaded Feb 13 20:51:47.551392 kernel: fuse: init (API version 7.39) Feb 13 20:51:47.551404 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:51:47.551417 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:51:47.551431 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:51:47.551443 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:51:47.551458 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:51:47.551471 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:51:47.551483 systemd[1]: Stopped verity-setup.service. Feb 13 20:51:47.551495 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:51:47.551507 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Feb 13 20:51:47.551520 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:51:47.551537 kernel: ACPI: bus type drm_connector registered Feb 13 20:51:47.551549 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:51:47.551585 systemd-journald[1090]: Collecting audit messages is disabled. Feb 13 20:51:47.551627 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:51:47.551641 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:51:47.551656 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:51:47.551672 systemd-journald[1090]: Journal started Feb 13 20:51:47.551698 systemd-journald[1090]: Runtime Journal (/run/log/journal/1bcb6defdff04d86a50eae32ab120dca) is 8.0M, max 78.3M, 70.3M free. Feb 13 20:51:47.132660 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:51:47.157144 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:51:47.157590 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:51:47.555291 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:51:47.556811 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:51:47.559157 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:51:47.559345 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:51:47.560362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:51:47.560550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:51:47.561722 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:51:47.562718 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:51:47.563811 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:51:47.564010 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:51:47.565058 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:51:47.565206 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:51:47.567258 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:51:47.567459 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:51:47.568515 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:51:47.570851 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:51:47.573090 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:51:47.574021 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:51:47.585587 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:51:47.593752 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:51:47.598430 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:51:47.599712 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:51:47.599752 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:51:47.602034 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Feb 13 20:51:47.608778 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:51:47.615764 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:51:47.617718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:51:47.625784 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:51:47.629215 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:51:47.630193 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:51:47.632718 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:51:47.633318 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:51:47.639901 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:51:47.642179 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:51:47.647747 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:51:47.650414 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:51:47.651182 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:51:47.651830 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:51:47.654649 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:51:47.666648 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:51:47.674268 systemd-journald[1090]: Time spent on flushing to /var/log/journal/1bcb6defdff04d86a50eae32ab120dca is 33.360ms for 931 entries. Feb 13 20:51:47.674268 systemd-journald[1090]: System Journal (/var/log/journal/1bcb6defdff04d86a50eae32ab120dca) is 8.0M, max 584.8M, 576.8M free. Feb 13 20:51:47.752193 systemd-journald[1090]: Received client request to flush runtime journal. Feb 13 20:51:47.752235 kernel: loop0: detected capacity change from 0 to 8 Feb 13 20:51:47.752266 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:51:47.706454 udevadm[1136]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:51:47.722831 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:51:47.731792 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:51:47.738077 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:51:47.748788 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:51:47.754821 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:51:47.765745 kernel: loop1: detected capacity change from 0 to 205544 Feb 13 20:51:47.814361 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:51:47.815265 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:51:47.836508 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Feb 13 20:51:47.844774 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:51:47.886629 systemd-tmpfiles[1149]: ACLs are not supported, ignoring. Feb 13 20:51:47.886648 systemd-tmpfiles[1149]: ACLs are not supported, ignoring. Feb 13 20:51:47.894669 kernel: loop2: detected capacity change from 0 to 138184 Feb 13 20:51:47.896247 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:51:47.964636 kernel: loop3: detected capacity change from 0 to 141000 Feb 13 20:51:48.076651 kernel: loop4: detected capacity change from 0 to 8 Feb 13 20:51:48.080641 kernel: loop5: detected capacity change from 0 to 205544 Feb 13 20:51:48.148723 kernel: loop6: detected capacity change from 0 to 138184 Feb 13 20:51:48.188642 kernel: loop7: detected capacity change from 0 to 141000 Feb 13 20:51:48.238956 (sd-merge)[1155]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Feb 13 20:51:48.239965 (sd-merge)[1155]: Merged extensions into '/usr'. Feb 13 20:51:48.247973 systemd[1]: Reloading requested from client PID 1130 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:51:48.248083 systemd[1]: Reloading... Feb 13 20:51:48.368667 zram_generator::config[1187]: No configuration found. Feb 13 20:51:48.586182 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:51:48.629297 ldconfig[1125]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:51:48.647846 systemd[1]: Reloading finished in 399 ms. Feb 13 20:51:48.675758 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:51:48.676707 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:51:48.677508 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:51:48.688778 systemd[1]: Starting ensure-sysext.service... Feb 13 20:51:48.690786 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:51:48.694257 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:51:48.715742 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:51:48.715761 systemd[1]: Reloading... Feb 13 20:51:48.730720 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:51:48.731389 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:51:48.734031 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:51:48.734830 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Feb 13 20:51:48.734983 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Feb 13 20:51:48.743548 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:51:48.743556 systemd-tmpfiles[1239]: Skipping /boot Feb 13 20:51:48.755126 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. 
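Editor's note: the (sd-merge) lines in the preceding entries report systemd-sysext picking up the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-openstack' extensions and merging them into /usr. The sketch below only enumerates candidate extension images the way an administrator might inspect them; the listed search directories are an assumption based on common systemd-sysext locations, and the actual overlay merge is performed by systemd-sysext itself, not by this code.

# Illustrative only: list sysext images a merge like the one logged above would consider.
import os

def list_sysext_candidates(dirs=("/etc/extensions", "/run/extensions", "/var/lib/extensions")):
    found = []
    for d in dirs:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            path = os.path.join(d, name)
            # Raw images (like kubernetes.raw from the Ignition stage) or plain directories.
            if name.endswith(".raw") or os.path.isdir(path):
                found.append(path)
    return found

if __name__ == "__main__":
    for path in list_sysext_candidates():
        print(path, "->", os.path.realpath(path))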
Feb 13 20:51:48.755490 systemd-tmpfiles[1239]: Skipping /boot Feb 13 20:51:48.773979 systemd-udevd[1240]: Using default interface naming scheme 'v255'. Feb 13 20:51:48.821668 zram_generator::config[1270]: No configuration found. Feb 13 20:51:48.920662 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1284) Feb 13 20:51:49.027955 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 13 20:51:49.046255 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:51:49.065992 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 20:51:49.074680 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 20:51:49.080631 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:51:49.089622 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:51:49.117787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:51:49.118581 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 20:51:49.118749 systemd[1]: Reloading finished in 402 ms. Feb 13 20:51:49.134197 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:51:49.142186 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:51:49.174410 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:51:49.181835 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 20:51:49.215833 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:51:49.216653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:51:49.222401 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:51:49.226774 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:51:49.240386 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Feb 13 20:51:49.240502 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Feb 13 20:51:49.238982 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:51:49.240739 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:51:49.241899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:51:49.248863 kernel: Console: switching to colour dummy device 80x25 Feb 13 20:51:49.248842 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:51:49.257403 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 20:51:49.257472 kernel: [drm] features: -context_init Feb 13 20:51:49.258766 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:51:49.260818 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:51:49.267888 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 20:51:49.276102 kernel: [drm] number of scanouts: 1 Feb 13 20:51:49.270822 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:51:49.272801 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:51:49.272885 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:51:49.273803 systemd[1]: Finished ensure-sysext.service. Feb 13 20:51:49.275470 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:51:49.275651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:51:49.275972 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:51:49.276699 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:51:49.277030 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:51:49.277150 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:51:49.282535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:51:49.283821 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:51:49.288729 kernel: [drm] number of cap sets: 0 Feb 13 20:51:49.293132 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:51:49.293310 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:51:49.302204 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Feb 13 20:51:49.302945 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:51:49.307839 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:51:49.313910 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Feb 13 20:51:49.325681 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 20:51:49.331218 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:51:49.331430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:51:49.336964 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 20:51:49.347706 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:51:49.359976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:51:49.362718 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:51:49.374881 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:51:49.379847 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:51:49.405692 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:51:49.452690 lvm[1389]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:51:49.492689 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:51:49.497070 augenrules[1410]: No rules Feb 13 20:51:49.503354 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:51:49.505127 systemd[1]: audit-rules.service: Deactivated successfully. 
Feb 13 20:51:49.506069 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 20:51:49.507169 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:51:49.510050 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:51:49.520781 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:51:49.540266 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:51:49.551445 systemd-networkd[1363]: lo: Link UP Feb 13 20:51:49.551455 systemd-networkd[1363]: lo: Gained carrier Feb 13 20:51:49.554866 systemd-networkd[1363]: Enumeration completed Feb 13 20:51:49.555009 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:51:49.557777 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:51:49.557782 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:51:49.558460 systemd-networkd[1363]: eth0: Link UP Feb 13 20:51:49.558464 systemd-networkd[1363]: eth0: Gained carrier Feb 13 20:51:49.558479 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:51:49.561871 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:51:49.569402 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:51:49.574074 systemd-networkd[1363]: eth0: DHCPv4 address 172.24.4.171/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 13 20:51:49.600637 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:51:49.605150 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:51:49.605544 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:51:49.609794 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:51:49.622138 systemd-resolved[1364]: Positive Trust Anchors: Feb 13 20:51:49.622159 systemd-resolved[1364]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:51:49.622212 systemd-resolved[1364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:51:49.627240 systemd-resolved[1364]: Using system hostname 'ci-4186-1-1-d-ccac17ed2f.novalocal'. Feb 13 20:51:49.629693 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:51:49.630880 systemd[1]: Reached target network.target - Network. Feb 13 20:51:49.630943 systemd-timesyncd[1373]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Feb 13 20:51:49.630954 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:51:49.630989 systemd-timesyncd[1373]: Initial clock synchronization to Thu 2025-02-13 20:51:49.732268 UTC. 
Feb 13 20:51:49.647657 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:51:49.650790 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:51:49.650850 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:51:49.654458 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:51:49.657670 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:51:49.661316 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:51:49.664921 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:51:49.668348 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:51:49.671224 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:51:49.671263 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:51:49.674093 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:51:49.678418 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:51:49.682355 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:51:49.690135 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:51:49.697524 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:51:49.698705 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:51:49.699411 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:51:49.700093 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:51:49.700132 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:51:49.709120 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:51:49.718014 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:51:49.728070 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:51:49.733715 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:51:49.749913 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:51:49.753046 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:51:49.756126 jq[1436]: false Feb 13 20:51:49.757925 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:51:49.762218 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:51:49.767154 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:51:49.774797 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:51:49.775917 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:51:49.776475 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Feb 13 20:51:49.787454 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:51:49.797239 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:51:49.803312 extend-filesystems[1437]: Found loop4 Feb 13 20:51:49.833769 extend-filesystems[1437]: Found loop5 Feb 13 20:51:49.833769 extend-filesystems[1437]: Found loop6 Feb 13 20:51:49.833769 extend-filesystems[1437]: Found loop7 Feb 13 20:51:49.833769 extend-filesystems[1437]: Found vda Feb 13 20:51:49.833769 extend-filesystems[1437]: Found vda1 Feb 13 20:51:49.833769 extend-filesystems[1437]: Found vda2 Feb 13 20:51:49.833769 extend-filesystems[1437]: Found vda3 Feb 13 20:51:49.833769 extend-filesystems[1437]: Found usr Feb 13 20:51:49.833769 extend-filesystems[1437]: Found vda4 Feb 13 20:51:49.833769 extend-filesystems[1437]: Found vda6 Feb 13 20:51:49.833769 extend-filesystems[1437]: Found vda7 Feb 13 20:51:49.833769 extend-filesystems[1437]: Found vda9 Feb 13 20:51:49.833769 extend-filesystems[1437]: Checking size of /dev/vda9 Feb 13 20:51:49.809699 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:51:49.806892 dbus-daemon[1435]: [system] SELinux support is enabled Feb 13 20:51:49.833542 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:51:49.835821 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:51:49.836164 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:51:49.901755 jq[1444]: true Feb 13 20:51:49.836335 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:51:49.901924 update_engine[1443]: I20250213 20:51:49.840507 1443 main.cc:92] Flatcar Update Engine starting Feb 13 20:51:49.901924 update_engine[1443]: I20250213 20:51:49.872678 1443 update_check_scheduler.cc:74] Next update check in 2m18s Feb 13 20:51:49.866132 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:51:49.866160 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:51:49.903466 jq[1454]: true Feb 13 20:51:49.875112 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:51:49.875139 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:51:49.887362 systemd-logind[1442]: New seat seat0. Feb 13 20:51:49.891411 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:51:49.893052 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:51:49.893070 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:51:49.895428 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:51:49.910089 extend-filesystems[1437]: Resized partition /dev/vda9 Feb 13 20:51:49.913485 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Feb 13 20:51:49.915113 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:51:49.923189 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Feb 13 20:51:49.919246 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:51:49.919438 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:51:49.930313 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Feb 13 20:51:49.936676 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1275) Feb 13 20:51:49.945291 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:51:50.071309 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:51:50.071309 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:51:50.071309 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Feb 13 20:51:50.095874 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Feb 13 20:51:50.081741 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:51:50.081988 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:51:50.123019 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:51:50.124649 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:51:50.130910 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:51:50.140428 systemd[1]: Starting sshkeys.service... Feb 13 20:51:50.175167 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:51:50.183987 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:51:50.307540 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:51:50.333545 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:51:50.344091 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:51:50.356704 containerd[1459]: time="2025-02-13T20:51:50.356577553Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 20:51:50.360143 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:51:50.360314 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:51:50.374069 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:51:50.392171 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:51:50.398902 containerd[1459]: time="2025-02-13T20:51:50.398695885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:51:50.400315 containerd[1459]: time="2025-02-13T20:51:50.400283450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:51:50.400666 containerd[1459]: time="2025-02-13T20:51:50.400370298Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
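Editor's note: a quick arithmetic check of the online resize reported above, taking the "(4k)" in the log to mean 4 KiB blocks: /dev/vda9 grows from 1617920 to 2014203 blocks, i.e. roughly 6.2 GiB to 7.7 GiB.

# Arithmetic check of the ext4 resize logged by extend-filesystems/resize2fs above.
BLOCK = 4096          # 4 KiB blocks, per the "(4k)" in the log
OLD_BLOCKS = 1617920
NEW_BLOCKS = 2014203

old_bytes = OLD_BLOCKS * BLOCK   # ~6.2 GiB
new_bytes = NEW_BLOCKS * BLOCK   # ~7.7 GiB

print(f"before: {old_bytes / 2**30:.2f} GiB")
print(f"after:  {new_bytes / 2**30:.2f} GiB")
print(f"gained: {(new_bytes - old_bytes) / 2**30:.2f} GiB")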
type=io.containerd.event.v1 Feb 13 20:51:50.400666 containerd[1459]: time="2025-02-13T20:51:50.400393522Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:51:50.400666 containerd[1459]: time="2025-02-13T20:51:50.400555591Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:51:50.400666 containerd[1459]: time="2025-02-13T20:51:50.400575802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:51:50.400850 containerd[1459]: time="2025-02-13T20:51:50.400826626Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:51:50.400911 containerd[1459]: time="2025-02-13T20:51:50.400897607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:51:50.401138 containerd[1459]: time="2025-02-13T20:51:50.401115204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:51:50.401215 containerd[1459]: time="2025-02-13T20:51:50.401199962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:51:50.401667 containerd[1459]: time="2025-02-13T20:51:50.401266762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:51:50.401667 containerd[1459]: time="2025-02-13T20:51:50.401284091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:51:50.401667 containerd[1459]: time="2025-02-13T20:51:50.401368209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:51:50.401667 containerd[1459]: time="2025-02-13T20:51:50.401605530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:51:50.401902 containerd[1459]: time="2025-02-13T20:51:50.401880786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:51:50.401963 containerd[1459]: time="2025-02-13T20:51:50.401950183Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:51:50.402101 containerd[1459]: time="2025-02-13T20:51:50.402083418Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:51:50.402211 containerd[1459]: time="2025-02-13T20:51:50.402193947Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:51:50.402805 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:51:50.417228 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:51:50.419775 systemd[1]: Reached target getty.target - Login Prompts. 
Feb 13 20:51:50.426914 containerd[1459]: time="2025-02-13T20:51:50.426850669Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:51:50.426987 containerd[1459]: time="2025-02-13T20:51:50.426925585Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:51:50.426987 containerd[1459]: time="2025-02-13T20:51:50.426945653Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:51:50.426987 containerd[1459]: time="2025-02-13T20:51:50.426969394Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:51:50.427078 containerd[1459]: time="2025-02-13T20:51:50.426988174Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:51:50.427211 containerd[1459]: time="2025-02-13T20:51:50.427166812Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:51:50.427440 containerd[1459]: time="2025-02-13T20:51:50.427417210Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:51:50.427549 containerd[1459]: time="2025-02-13T20:51:50.427527495Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:51:50.427626 containerd[1459]: time="2025-02-13T20:51:50.427553042Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:51:50.427626 containerd[1459]: time="2025-02-13T20:51:50.427570939Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:51:50.427626 containerd[1459]: time="2025-02-13T20:51:50.427585975Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:51:50.427626 containerd[1459]: time="2025-02-13T20:51:50.427601438Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:51:50.427754 containerd[1459]: time="2025-02-13T20:51:50.427640458Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:51:50.427754 containerd[1459]: time="2025-02-13T20:51:50.427658294Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:51:50.427754 containerd[1459]: time="2025-02-13T20:51:50.427681041Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:51:50.427754 containerd[1459]: time="2025-02-13T20:51:50.427696859Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:51:50.427754 containerd[1459]: time="2025-02-13T20:51:50.427712919Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:51:50.427754 containerd[1459]: time="2025-02-13T20:51:50.427725734Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:51:50.427754 containerd[1459]: time="2025-02-13T20:51:50.427748654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 13 20:51:50.427910 containerd[1459]: time="2025-02-13T20:51:50.427764359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.427910 containerd[1459]: time="2025-02-13T20:51:50.427779395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.427910 containerd[1459]: time="2025-02-13T20:51:50.427794594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.427910 containerd[1459]: time="2025-02-13T20:51:50.427808706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.427910 containerd[1459]: time="2025-02-13T20:51:50.427823620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.427910 containerd[1459]: time="2025-02-13T20:51:50.427836871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.427910 containerd[1459]: time="2025-02-13T20:51:50.427852435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.427910 containerd[1459]: time="2025-02-13T20:51:50.427868719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.427910 containerd[1459]: time="2025-02-13T20:51:50.427886352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.427910 containerd[1459]: time="2025-02-13T20:51:50.427900110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.428176 containerd[1459]: time="2025-02-13T20:51:50.427914821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.428176 containerd[1459]: time="2025-02-13T20:51:50.427930547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.428176 containerd[1459]: time="2025-02-13T20:51:50.427949185Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:51:50.428176 containerd[1459]: time="2025-02-13T20:51:50.427972318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.428176 containerd[1459]: time="2025-02-13T20:51:50.427987851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.428176 containerd[1459]: time="2025-02-13T20:51:50.428000543Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:51:50.429182 containerd[1459]: time="2025-02-13T20:51:50.428708761Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:51:50.429182 containerd[1459]: time="2025-02-13T20:51:50.428738640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:51:50.429182 containerd[1459]: time="2025-02-13T20:51:50.428812654Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 13 20:51:50.429182 containerd[1459]: time="2025-02-13T20:51:50.428832601Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:51:50.429182 containerd[1459]: time="2025-02-13T20:51:50.428844491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.429182 containerd[1459]: time="2025-02-13T20:51:50.428859853Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:51:50.429182 containerd[1459]: time="2025-02-13T20:51:50.428871500Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:51:50.429182 containerd[1459]: time="2025-02-13T20:51:50.428883756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:51:50.429653 containerd[1459]: time="2025-02-13T20:51:50.429568110Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:51:50.429795 
containerd[1459]: time="2025-02-13T20:51:50.429657962Z" level=info msg="Connect containerd service" Feb 13 20:51:50.429795 containerd[1459]: time="2025-02-13T20:51:50.429691240Z" level=info msg="using legacy CRI server" Feb 13 20:51:50.429795 containerd[1459]: time="2025-02-13T20:51:50.429699326Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:51:50.429874 containerd[1459]: time="2025-02-13T20:51:50.429805604Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:51:50.430564 containerd[1459]: time="2025-02-13T20:51:50.430531699Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:51:50.430681 containerd[1459]: time="2025-02-13T20:51:50.430644013Z" level=info msg="Start subscribing containerd event" Feb 13 20:51:50.430726 containerd[1459]: time="2025-02-13T20:51:50.430693361Z" level=info msg="Start recovering state" Feb 13 20:51:50.430769 containerd[1459]: time="2025-02-13T20:51:50.430749671Z" level=info msg="Start event monitor" Feb 13 20:51:50.430800 containerd[1459]: time="2025-02-13T20:51:50.430776781Z" level=info msg="Start snapshots syncer" Feb 13 20:51:50.430800 containerd[1459]: time="2025-02-13T20:51:50.430788012Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:51:50.430800 containerd[1459]: time="2025-02-13T20:51:50.430796291Z" level=info msg="Start streaming server" Feb 13 20:51:50.431266 containerd[1459]: time="2025-02-13T20:51:50.431243508Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:51:50.431336 containerd[1459]: time="2025-02-13T20:51:50.431311535Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:51:50.432076 containerd[1459]: time="2025-02-13T20:51:50.432042044Z" level=info msg="containerd successfully booted in 0.076409s" Feb 13 20:51:50.438392 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:51:50.444288 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:51:50.454024 systemd[1]: Started sshd@0-172.24.4.171:22-172.24.4.1:39354.service - OpenSSH per-connection server daemon (172.24.4.1:39354). Feb 13 20:51:50.623261 systemd-networkd[1363]: eth0: Gained IPv6LL Feb 13 20:51:50.628821 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:51:50.633475 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:51:50.650200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:51:50.656858 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:51:50.706430 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:51:52.269936 sshd[1522]: Accepted publickey for core from 172.24.4.1 port 39354 ssh2: RSA SHA256:CiNv2FRkKpV7sRB9915rSbtRQfhdW0517nbT6Tnktqk Feb 13 20:51:52.274694 sshd-session[1522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:52.297447 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:51:52.307136 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:51:52.315085 systemd-logind[1442]: New session 1 of user core. 
Feb 13 20:51:52.333835 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:51:52.352165 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:51:52.373775 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:51:52.528594 systemd[1539]: Queued start job for default target default.target. Feb 13 20:51:52.536570 systemd[1539]: Created slice app.slice - User Application Slice. Feb 13 20:51:52.536732 systemd[1539]: Reached target paths.target - Paths. Feb 13 20:51:52.536837 systemd[1539]: Reached target timers.target - Timers. Feb 13 20:51:52.539359 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:51:52.572057 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:51:52.572176 systemd[1539]: Reached target sockets.target - Sockets. Feb 13 20:51:52.572193 systemd[1539]: Reached target basic.target - Basic System. Feb 13 20:51:52.572231 systemd[1539]: Reached target default.target - Main User Target. Feb 13 20:51:52.572263 systemd[1539]: Startup finished in 183ms. Feb 13 20:51:52.572816 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:51:52.583938 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:51:53.015992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:51:53.028261 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:51:53.063183 systemd[1]: Started sshd@1-172.24.4.171:22-172.24.4.1:39360.service - OpenSSH per-connection server daemon (172.24.4.1:39360). Feb 13 20:51:54.576325 sshd[1557]: Accepted publickey for core from 172.24.4.1 port 39360 ssh2: RSA SHA256:CiNv2FRkKpV7sRB9915rSbtRQfhdW0517nbT6Tnktqk Feb 13 20:51:54.578129 sshd-session[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:54.589166 systemd-logind[1442]: New session 2 of user core. Feb 13 20:51:54.601993 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:51:54.807565 kubelet[1555]: E0213 20:51:54.807379 1555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:51:54.812851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:51:54.813179 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:51:54.813950 systemd[1]: kubelet.service: Consumed 2.016s CPU time. Feb 13 20:51:55.349441 sshd[1565]: Connection closed by 172.24.4.1 port 39360 Feb 13 20:51:55.352024 sshd-session[1557]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:55.374509 systemd[1]: sshd@1-172.24.4.171:22-172.24.4.1:39360.service: Deactivated successfully. Feb 13 20:51:55.378169 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:51:55.380502 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:51:55.395406 systemd[1]: Started sshd@2-172.24.4.171:22-172.24.4.1:57698.service - OpenSSH per-connection server daemon (172.24.4.1:57698). Feb 13 20:51:55.405672 systemd-logind[1442]: Removed session 2. 
Feb 13 20:51:55.454439 agetty[1519]: failed to open credentials directory Feb 13 20:51:55.454932 agetty[1517]: failed to open credentials directory Feb 13 20:51:55.476663 login[1517]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 20:51:55.477743 login[1519]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 20:51:55.483502 systemd-logind[1442]: New session 3 of user core. Feb 13 20:51:55.490853 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:51:55.495260 systemd-logind[1442]: New session 4 of user core. Feb 13 20:51:55.502860 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:51:56.541579 sshd[1571]: Accepted publickey for core from 172.24.4.1 port 57698 ssh2: RSA SHA256:CiNv2FRkKpV7sRB9915rSbtRQfhdW0517nbT6Tnktqk Feb 13 20:51:56.546090 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:56.559968 systemd-logind[1442]: New session 5 of user core. Feb 13 20:51:56.568791 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:51:56.791081 coreos-metadata[1432]: Feb 13 20:51:56.790 WARN failed to locate config-drive, using the metadata service API instead Feb 13 20:51:56.839584 coreos-metadata[1432]: Feb 13 20:51:56.839 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Feb 13 20:51:57.041014 coreos-metadata[1432]: Feb 13 20:51:57.040 INFO Fetch successful Feb 13 20:51:57.041014 coreos-metadata[1432]: Feb 13 20:51:57.040 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 20:51:57.051915 coreos-metadata[1432]: Feb 13 20:51:57.051 INFO Fetch successful Feb 13 20:51:57.051915 coreos-metadata[1432]: Feb 13 20:51:57.051 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 13 20:51:57.066349 coreos-metadata[1432]: Feb 13 20:51:57.066 INFO Fetch successful Feb 13 20:51:57.066349 coreos-metadata[1432]: Feb 13 20:51:57.066 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 13 20:51:57.081707 coreos-metadata[1432]: Feb 13 20:51:57.081 INFO Fetch successful Feb 13 20:51:57.081707 coreos-metadata[1432]: Feb 13 20:51:57.081 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 13 20:51:57.092741 coreos-metadata[1432]: Feb 13 20:51:57.092 INFO Fetch successful Feb 13 20:51:57.092741 coreos-metadata[1432]: Feb 13 20:51:57.092 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 13 20:51:57.104074 coreos-metadata[1432]: Feb 13 20:51:57.103 INFO Fetch successful Feb 13 20:51:57.161733 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:51:57.163833 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:51:57.171658 sshd[1600]: Connection closed by 172.24.4.1 port 57698 Feb 13 20:51:57.172515 sshd-session[1571]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:57.179883 systemd[1]: sshd@2-172.24.4.171:22-172.24.4.1:57698.service: Deactivated successfully. Feb 13 20:51:57.184266 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:51:57.186284 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:51:57.189008 systemd-logind[1442]: Removed session 5. 
Feb 13 20:51:57.294479 coreos-metadata[1494]: Feb 13 20:51:57.294 WARN failed to locate config-drive, using the metadata service API instead Feb 13 20:51:57.338495 coreos-metadata[1494]: Feb 13 20:51:57.338 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 13 20:51:57.355387 coreos-metadata[1494]: Feb 13 20:51:57.354 INFO Fetch successful Feb 13 20:51:57.355387 coreos-metadata[1494]: Feb 13 20:51:57.354 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 20:51:57.369121 coreos-metadata[1494]: Feb 13 20:51:57.369 INFO Fetch successful Feb 13 20:51:57.379417 unknown[1494]: wrote ssh authorized keys file for user: core Feb 13 20:51:57.433345 update-ssh-keys[1613]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:51:57.434330 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:51:57.440237 systemd[1]: Finished sshkeys.service. Feb 13 20:51:57.442379 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:51:57.442889 systemd[1]: Startup finished in 1.253s (kernel) + 14.637s (initrd) + 11.126s (userspace) = 27.017s. Feb 13 20:52:04.912215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:52:04.927056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:52:05.306961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:52:05.310896 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:52:05.393757 kubelet[1625]: E0213 20:52:05.391594 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:52:05.400771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:52:05.401091 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:52:07.221215 systemd[1]: Started sshd@3-172.24.4.171:22-172.24.4.1:47878.service - OpenSSH per-connection server daemon (172.24.4.1:47878). Feb 13 20:52:08.763775 sshd[1634]: Accepted publickey for core from 172.24.4.1 port 47878 ssh2: RSA SHA256:CiNv2FRkKpV7sRB9915rSbtRQfhdW0517nbT6Tnktqk Feb 13 20:52:08.766811 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:08.778720 systemd-logind[1442]: New session 6 of user core. Feb 13 20:52:08.784903 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:52:09.408653 sshd[1636]: Connection closed by 172.24.4.1 port 47878 Feb 13 20:52:09.409174 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:09.423854 systemd[1]: sshd@3-172.24.4.171:22-172.24.4.1:47878.service: Deactivated successfully. Feb 13 20:52:09.427527 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:52:09.430760 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:52:09.441183 systemd[1]: Started sshd@4-172.24.4.171:22-172.24.4.1:47882.service - OpenSSH per-connection server daemon (172.24.4.1:47882). Feb 13 20:52:09.444049 systemd-logind[1442]: Removed session 6. 
Feb 13 20:52:10.890259 sshd[1641]: Accepted publickey for core from 172.24.4.1 port 47882 ssh2: RSA SHA256:CiNv2FRkKpV7sRB9915rSbtRQfhdW0517nbT6Tnktqk Feb 13 20:52:10.893185 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:10.902398 systemd-logind[1442]: New session 7 of user core. Feb 13 20:52:10.914882 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:52:11.674244 sshd[1643]: Connection closed by 172.24.4.1 port 47882 Feb 13 20:52:11.674870 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:11.686109 systemd[1]: sshd@4-172.24.4.171:22-172.24.4.1:47882.service: Deactivated successfully. Feb 13 20:52:11.689215 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:52:11.691834 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:52:11.708337 systemd[1]: Started sshd@5-172.24.4.171:22-172.24.4.1:47886.service - OpenSSH per-connection server daemon (172.24.4.1:47886). Feb 13 20:52:11.710975 systemd-logind[1442]: Removed session 7. Feb 13 20:52:13.082734 sshd[1648]: Accepted publickey for core from 172.24.4.1 port 47886 ssh2: RSA SHA256:CiNv2FRkKpV7sRB9915rSbtRQfhdW0517nbT6Tnktqk Feb 13 20:52:13.085496 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:13.097460 systemd-logind[1442]: New session 8 of user core. Feb 13 20:52:13.109915 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:52:13.655854 sshd[1650]: Connection closed by 172.24.4.1 port 47886 Feb 13 20:52:13.657025 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:13.671150 systemd[1]: sshd@5-172.24.4.171:22-172.24.4.1:47886.service: Deactivated successfully. Feb 13 20:52:13.676434 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:52:13.682739 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:52:13.689675 systemd[1]: Started sshd@6-172.24.4.171:22-172.24.4.1:59268.service - OpenSSH per-connection server daemon (172.24.4.1:59268). Feb 13 20:52:13.694555 systemd-logind[1442]: Removed session 8. Feb 13 20:52:14.877436 sshd[1655]: Accepted publickey for core from 172.24.4.1 port 59268 ssh2: RSA SHA256:CiNv2FRkKpV7sRB9915rSbtRQfhdW0517nbT6Tnktqk Feb 13 20:52:14.883080 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:14.894729 systemd-logind[1442]: New session 9 of user core. Feb 13 20:52:14.900886 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:52:15.346995 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:52:15.348452 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:52:15.368271 sudo[1658]: pam_unix(sudo:session): session closed for user root Feb 13 20:52:15.411860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:52:15.436174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:52:15.520684 sshd[1657]: Connection closed by 172.24.4.1 port 59268 Feb 13 20:52:15.520117 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:15.533963 systemd[1]: sshd@6-172.24.4.171:22-172.24.4.1:59268.service: Deactivated successfully. Feb 13 20:52:15.537277 systemd[1]: session-9.scope: Deactivated successfully. 
Feb 13 20:52:15.540934 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:52:15.548888 systemd[1]: Started sshd@7-172.24.4.171:22-172.24.4.1:59274.service - OpenSSH per-connection server daemon (172.24.4.1:59274). Feb 13 20:52:15.551703 systemd-logind[1442]: Removed session 9. Feb 13 20:52:15.782927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:52:15.787243 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:52:15.938923 kubelet[1673]: E0213 20:52:15.938813 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:52:15.943050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:52:15.943430 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:52:16.842864 sshd[1666]: Accepted publickey for core from 172.24.4.1 port 59274 ssh2: RSA SHA256:CiNv2FRkKpV7sRB9915rSbtRQfhdW0517nbT6Tnktqk Feb 13 20:52:16.845798 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:16.858916 systemd-logind[1442]: New session 10 of user core. Feb 13 20:52:16.870009 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:52:17.319296 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:52:17.319995 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:52:17.328512 sudo[1682]: pam_unix(sudo:session): session closed for user root Feb 13 20:52:17.341113 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 20:52:17.341825 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:52:17.374336 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 20:52:17.447571 augenrules[1704]: No rules Feb 13 20:52:17.450035 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:52:17.450475 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 20:52:17.453396 sudo[1681]: pam_unix(sudo:session): session closed for user root Feb 13 20:52:17.772274 sshd[1680]: Connection closed by 172.24.4.1 port 59274 Feb 13 20:52:17.773226 sshd-session[1666]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:17.788647 systemd[1]: sshd@7-172.24.4.171:22-172.24.4.1:59274.service: Deactivated successfully. Feb 13 20:52:17.792347 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:52:17.796219 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:52:17.805196 systemd[1]: Started sshd@8-172.24.4.171:22-172.24.4.1:59280.service - OpenSSH per-connection server daemon (172.24.4.1:59280). Feb 13 20:52:17.808905 systemd-logind[1442]: Removed session 10. 
Feb 13 20:52:19.148121 sshd[1712]: Accepted publickey for core from 172.24.4.1 port 59280 ssh2: RSA SHA256:CiNv2FRkKpV7sRB9915rSbtRQfhdW0517nbT6Tnktqk Feb 13 20:52:19.151898 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:19.173263 systemd-logind[1442]: New session 11 of user core. Feb 13 20:52:19.188052 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:52:19.568856 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:52:19.569499 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:52:22.218817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:52:22.233529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:52:22.297568 systemd[1]: Reloading requested from client PID 1747 ('systemctl') (unit session-11.scope)... Feb 13 20:52:22.297589 systemd[1]: Reloading... Feb 13 20:52:22.390627 zram_generator::config[1783]: No configuration found. Feb 13 20:52:22.841953 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:52:22.934360 systemd[1]: Reloading finished in 636 ms. Feb 13 20:52:22.996211 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:52:22.996303 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:52:22.996696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:52:23.002893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:52:23.132743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:52:23.144132 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:52:23.510698 kubelet[1852]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:52:23.510698 kubelet[1852]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:52:23.510698 kubelet[1852]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 20:52:23.511468 kubelet[1852]: I0213 20:52:23.510837 1852 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:52:23.958259 kubelet[1852]: I0213 20:52:23.958164 1852 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:52:23.959299 kubelet[1852]: I0213 20:52:23.958502 1852 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:52:23.959299 kubelet[1852]: I0213 20:52:23.959199 1852 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:52:24.000626 kubelet[1852]: I0213 20:52:23.999242 1852 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:52:24.015191 kubelet[1852]: E0213 20:52:24.015149 1852 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:52:24.015392 kubelet[1852]: I0213 20:52:24.015378 1852 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:52:24.020376 kubelet[1852]: I0213 20:52:24.020351 1852 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:52:24.020562 kubelet[1852]: I0213 20:52:24.020547 1852 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:52:24.020802 kubelet[1852]: I0213 20:52:24.020755 1852 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:52:24.021074 kubelet[1852]: I0213 20:52:24.020864 1852 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.171","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:52:24.021214 kubelet[1852]: I0213 20:52:24.021203 1852 topology_manager.go:138] "Creating 
topology manager with none policy" Feb 13 20:52:24.021276 kubelet[1852]: I0213 20:52:24.021268 1852 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:52:24.021453 kubelet[1852]: I0213 20:52:24.021440 1852 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:52:24.024920 kubelet[1852]: I0213 20:52:24.024906 1852 kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:52:24.024997 kubelet[1852]: I0213 20:52:24.024988 1852 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:52:24.025098 kubelet[1852]: I0213 20:52:24.025074 1852 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:52:24.025167 kubelet[1852]: I0213 20:52:24.025158 1852 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:52:24.032595 kubelet[1852]: E0213 20:52:24.032515 1852 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:24.032712 kubelet[1852]: E0213 20:52:24.032689 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:24.034237 kubelet[1852]: I0213 20:52:24.034206 1852 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 20:52:24.036804 kubelet[1852]: I0213 20:52:24.036745 1852 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:52:24.037871 kubelet[1852]: W0213 20:52:24.037815 1852 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:52:24.038443 kubelet[1852]: I0213 20:52:24.038406 1852 server.go:1269] "Started kubelet" Feb 13 20:52:24.039101 kubelet[1852]: I0213 20:52:24.038821 1852 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:52:24.041762 kubelet[1852]: I0213 20:52:24.041713 1852 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:52:24.046617 kubelet[1852]: I0213 20:52:24.045707 1852 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:52:24.046617 kubelet[1852]: I0213 20:52:24.045988 1852 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:52:24.047374 kubelet[1852]: I0213 20:52:24.047333 1852 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:52:24.047553 kubelet[1852]: I0213 20:52:24.047538 1852 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:52:24.049333 kubelet[1852]: I0213 20:52:24.049310 1852 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:52:24.049536 kubelet[1852]: I0213 20:52:24.049523 1852 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:52:24.049663 kubelet[1852]: I0213 20:52:24.049639 1852 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:52:24.051520 kubelet[1852]: E0213 20:52:24.051503 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:24.054096 kubelet[1852]: W0213 20:52:24.054075 1852 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in 
API group "storage.k8s.io" at the cluster scope Feb 13 20:52:24.054564 kubelet[1852]: E0213 20:52:24.054541 1852 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 20:52:24.059533 kubelet[1852]: I0213 20:52:24.059459 1852 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:52:24.063068 kubelet[1852]: E0213 20:52:24.063043 1852 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:52:24.064385 kubelet[1852]: I0213 20:52:24.064346 1852 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:52:24.064480 kubelet[1852]: I0213 20:52:24.064470 1852 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:52:24.093622 kubelet[1852]: W0213 20:52:24.091563 1852 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 20:52:24.093622 kubelet[1852]: E0213 20:52:24.091665 1852 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 20:52:24.093622 kubelet[1852]: E0213 20:52:24.091733 1852 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.171\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 20:52:24.093622 kubelet[1852]: W0213 20:52:24.092025 1852 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.171" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 20:52:24.093622 kubelet[1852]: E0213 20:52:24.092045 1852 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.24.4.171\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 20:52:24.093868 kubelet[1852]: E0213 20:52:24.075273 1852 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.171.1823dfcafe52f4a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.171,UID:172.24.4.171,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.24.4.171,},FirstTimestamp:2025-02-13 20:52:24.038380711 +0000 UTC m=+0.890229845,LastTimestamp:2025-02-13 20:52:24.038380711 +0000 UTC 
m=+0.890229845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.171,}" Feb 13 20:52:24.096616 kubelet[1852]: E0213 20:52:24.094798 1852 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.171.1823dfcaffcb0e15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.171,UID:172.24.4.171,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.24.4.171,},FirstTimestamp:2025-02-13 20:52:24.063028757 +0000 UTC m=+0.914877901,LastTimestamp:2025-02-13 20:52:24.063028757 +0000 UTC m=+0.914877901,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.171,}" Feb 13 20:52:24.106645 kubelet[1852]: I0213 20:52:24.106512 1852 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:52:24.106645 kubelet[1852]: I0213 20:52:24.106535 1852 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:52:24.106645 kubelet[1852]: I0213 20:52:24.106555 1852 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:52:24.116330 kubelet[1852]: I0213 20:52:24.115694 1852 policy_none.go:49] "None policy: Start" Feb 13 20:52:24.117575 kubelet[1852]: I0213 20:52:24.117535 1852 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:52:24.117575 kubelet[1852]: I0213 20:52:24.117565 1852 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:52:24.126498 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:52:24.152634 kubelet[1852]: E0213 20:52:24.152401 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:24.154677 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:52:24.164379 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:52:24.167193 kubelet[1852]: I0213 20:52:24.166374 1852 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:52:24.167193 kubelet[1852]: I0213 20:52:24.166557 1852 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:52:24.167193 kubelet[1852]: I0213 20:52:24.166574 1852 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:52:24.167193 kubelet[1852]: I0213 20:52:24.167110 1852 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:52:24.169621 kubelet[1852]: E0213 20:52:24.169584 1852 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.171\" not found" Feb 13 20:52:24.171187 kubelet[1852]: I0213 20:52:24.171150 1852 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:52:24.174253 kubelet[1852]: I0213 20:52:24.174186 1852 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:52:24.174305 kubelet[1852]: I0213 20:52:24.174273 1852 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:52:24.174341 kubelet[1852]: I0213 20:52:24.174321 1852 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:52:24.174597 kubelet[1852]: E0213 20:52:24.174501 1852 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 20:52:24.271694 kubelet[1852]: I0213 20:52:24.269526 1852 kubelet_node_status.go:72] "Attempting to register node" node="172.24.4.171" Feb 13 20:52:24.280747 kubelet[1852]: I0213 20:52:24.280693 1852 kubelet_node_status.go:75] "Successfully registered node" node="172.24.4.171" Feb 13 20:52:24.280747 kubelet[1852]: E0213 20:52:24.280749 1852 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.24.4.171\": node \"172.24.4.171\" not found" Feb 13 20:52:24.310556 kubelet[1852]: E0213 20:52:24.310446 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:24.411128 kubelet[1852]: E0213 20:52:24.411036 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:24.511385 kubelet[1852]: E0213 20:52:24.511296 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:24.612505 kubelet[1852]: E0213 20:52:24.612196 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:24.687811 sudo[1715]: pam_unix(sudo:session): session closed for user root Feb 13 20:52:24.717049 kubelet[1852]: E0213 20:52:24.716896 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:24.818003 kubelet[1852]: E0213 20:52:24.817910 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:24.848796 sshd[1714]: Connection closed by 172.24.4.1 port 59280 Feb 13 20:52:24.849902 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:24.858451 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:52:24.858719 systemd[1]: sshd@8-172.24.4.171:22-172.24.4.1:59280.service: Deactivated successfully. Feb 13 20:52:24.862215 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:52:24.862991 systemd[1]: session-11.scope: Consumed 1.068s CPU time, 75.0M memory peak, 0B memory swap peak. Feb 13 20:52:24.867075 systemd-logind[1442]: Removed session 11. 
Feb 13 20:52:24.919167 kubelet[1852]: E0213 20:52:24.919041 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:24.963042 kubelet[1852]: I0213 20:52:24.962951 1852 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 20:52:24.963438 kubelet[1852]: W0213 20:52:24.963361 1852 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 20:52:25.019848 kubelet[1852]: E0213 20:52:25.019738 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:25.033478 kubelet[1852]: E0213 20:52:25.033388 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:25.121305 kubelet[1852]: E0213 20:52:25.120888 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:25.221162 kubelet[1852]: E0213 20:52:25.221079 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:25.322315 kubelet[1852]: E0213 20:52:25.322205 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:25.423307 kubelet[1852]: E0213 20:52:25.423202 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:25.523557 kubelet[1852]: E0213 20:52:25.523433 1852 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.171\" not found" Feb 13 20:52:25.625988 kubelet[1852]: I0213 20:52:25.625936 1852 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 20:52:25.627833 containerd[1459]: time="2025-02-13T20:52:25.627705294Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:52:25.629947 kubelet[1852]: I0213 20:52:25.628151 1852 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 20:52:26.033261 kubelet[1852]: I0213 20:52:26.033041 1852 apiserver.go:52] "Watching apiserver" Feb 13 20:52:26.033883 kubelet[1852]: E0213 20:52:26.033818 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:26.080039 systemd[1]: Created slice kubepods-besteffort-pod358b11af_2fe9_4ee1_bafa_6711cce1b7b1.slice - libcontainer container kubepods-besteffort-pod358b11af_2fe9_4ee1_bafa_6711cce1b7b1.slice. Feb 13 20:52:26.108794 systemd[1]: Created slice kubepods-burstable-podc744909b_22b6_420b_8e32_54ee72b34026.slice - libcontainer container kubepods-burstable-podc744909b_22b6_420b_8e32_54ee72b34026.slice. 
Feb 13 20:52:26.151065 kubelet[1852]: I0213 20:52:26.150973 1852 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:52:26.162715 kubelet[1852]: I0213 20:52:26.161571 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-xtables-lock\") pod \"cilium-tqnr4\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " pod="kube-system/cilium-tqnr4" Feb 13 20:52:26.162715 kubelet[1852]: I0213 20:52:26.161711 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c744909b-22b6-420b-8e32-54ee72b34026-clustermesh-secrets\") pod \"cilium-tqnr4\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " pod="kube-system/cilium-tqnr4" Feb 13 20:52:26.162715 kubelet[1852]: I0213 20:52:26.161768 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c744909b-22b6-420b-8e32-54ee72b34026-cilium-config-path\") pod \"cilium-tqnr4\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " pod="kube-system/cilium-tqnr4" Feb 13 20:52:26.162715 kubelet[1852]: I0213 20:52:26.161808 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-hostproc\") pod \"cilium-tqnr4\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " pod="kube-system/cilium-tqnr4" Feb 13 20:52:26.162715 kubelet[1852]: I0213 20:52:26.161851 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-lib-modules\") pod \"cilium-tqnr4\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " pod="kube-system/cilium-tqnr4" Feb 13 20:52:26.162715 kubelet[1852]: I0213 20:52:26.161918 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lns\" (UniqueName: \"kubernetes.io/projected/c744909b-22b6-420b-8e32-54ee72b34026-kube-api-access-k9lns\") pod \"cilium-tqnr4\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " pod="kube-system/cilium-tqnr4" Feb 13 20:52:26.163196 kubelet[1852]: I0213 20:52:26.161961 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbd4n\" (UniqueName: \"kubernetes.io/projected/358b11af-2fe9-4ee1-bafa-6711cce1b7b1-kube-api-access-nbd4n\") pod \"kube-proxy-drzrh\" (UID: \"358b11af-2fe9-4ee1-bafa-6711cce1b7b1\") " pod="kube-system/kube-proxy-drzrh" Feb 13 20:52:26.163196 kubelet[1852]: I0213 20:52:26.162013 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-cni-path\") pod \"cilium-tqnr4\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " pod="kube-system/cilium-tqnr4" Feb 13 20:52:26.163196 kubelet[1852]: I0213 20:52:26.162051 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-etc-cni-netd\") pod \"cilium-tqnr4\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " pod="kube-system/cilium-tqnr4" Feb 13 
20:52:26.163196 kubelet[1852]: I0213 20:52:26.162092 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c744909b-22b6-420b-8e32-54ee72b34026-hubble-tls\") pod \"cilium-tqnr4\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " pod="kube-system/cilium-tqnr4" Feb 13 20:52:26.163196 kubelet[1852]: I0213 20:52:26.162131 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/358b11af-2fe9-4ee1-bafa-6711cce1b7b1-xtables-lock\") pod \"kube-proxy-drzrh\" (UID: \"358b11af-2fe9-4ee1-bafa-6711cce1b7b1\") " pod="kube-system/kube-proxy-drzrh" Feb 13 20:52:26.163196 kubelet[1852]: I0213 20:52:26.162168 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-bpf-maps\") pod \"cilium-tqnr4\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " pod="kube-system/cilium-tqnr4" Feb 13 20:52:26.163551 kubelet[1852]: I0213 20:52:26.162205 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-host-proc-sys-net\") pod \"cilium-tqnr4\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " pod="kube-system/cilium-tqnr4" Feb 13 20:52:26.163551 kubelet[1852]: I0213 20:52:26.162242 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-host-proc-sys-kernel\") pod \"cilium-tqnr4\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " pod="kube-system/cilium-tqnr4" Feb 13 20:52:26.163551 kubelet[1852]: I0213 20:52:26.162280 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/358b11af-2fe9-4ee1-bafa-6711cce1b7b1-kube-proxy\") pod \"kube-proxy-drzrh\" (UID: \"358b11af-2fe9-4ee1-bafa-6711cce1b7b1\") " pod="kube-system/kube-proxy-drzrh" Feb 13 20:52:26.163551 kubelet[1852]: I0213 20:52:26.162320 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/358b11af-2fe9-4ee1-bafa-6711cce1b7b1-lib-modules\") pod \"kube-proxy-drzrh\" (UID: \"358b11af-2fe9-4ee1-bafa-6711cce1b7b1\") " pod="kube-system/kube-proxy-drzrh" Feb 13 20:52:26.163551 kubelet[1852]: I0213 20:52:26.162358 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-cilium-run\") pod \"cilium-tqnr4\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " pod="kube-system/cilium-tqnr4" Feb 13 20:52:26.163551 kubelet[1852]: I0213 20:52:26.162398 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-cilium-cgroup\") pod \"cilium-tqnr4\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " pod="kube-system/cilium-tqnr4" Feb 13 20:52:26.400981 containerd[1459]: time="2025-02-13T20:52:26.400840007Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-drzrh,Uid:358b11af-2fe9-4ee1-bafa-6711cce1b7b1,Namespace:kube-system,Attempt:0,}" Feb 13 20:52:26.425779 containerd[1459]: time="2025-02-13T20:52:26.425675158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tqnr4,Uid:c744909b-22b6-420b-8e32-54ee72b34026,Namespace:kube-system,Attempt:0,}" Feb 13 20:52:27.035001 kubelet[1852]: E0213 20:52:27.034884 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:27.109782 containerd[1459]: time="2025-02-13T20:52:27.109334555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:52:27.113242 containerd[1459]: time="2025-02-13T20:52:27.113120369Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:52:27.115474 containerd[1459]: time="2025-02-13T20:52:27.115328783Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 20:52:27.120900 containerd[1459]: time="2025-02-13T20:52:27.120579204Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:52:27.122300 containerd[1459]: time="2025-02-13T20:52:27.122150212Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:52:27.130208 containerd[1459]: time="2025-02-13T20:52:27.130065443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:52:27.133522 containerd[1459]: time="2025-02-13T20:52:27.132490245Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 731.512163ms" Feb 13 20:52:27.139784 containerd[1459]: time="2025-02-13T20:52:27.139405321Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 713.537941ms" Feb 13 20:52:27.281795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1553239356.mount: Deactivated successfully. Feb 13 20:52:27.342869 containerd[1459]: time="2025-02-13T20:52:27.341620184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:52:27.343888 containerd[1459]: time="2025-02-13T20:52:27.343230970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:52:27.343888 containerd[1459]: time="2025-02-13T20:52:27.343277533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:52:27.343888 containerd[1459]: time="2025-02-13T20:52:27.343292162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:52:27.343888 containerd[1459]: time="2025-02-13T20:52:27.343365237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:52:27.343888 containerd[1459]: time="2025-02-13T20:52:27.342838762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:52:27.343888 containerd[1459]: time="2025-02-13T20:52:27.342876266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:52:27.343888 containerd[1459]: time="2025-02-13T20:52:27.342968659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:52:27.458800 systemd[1]: Started cri-containerd-2186d5568aadb202d6c0edd547319144ddb8b2752e68b3b3f6cfd0ce2e64efbe.scope - libcontainer container 2186d5568aadb202d6c0edd547319144ddb8b2752e68b3b3f6cfd0ce2e64efbe. Feb 13 20:52:27.460234 systemd[1]: Started cri-containerd-78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48.scope - libcontainer container 78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48. Feb 13 20:52:27.503166 containerd[1459]: time="2025-02-13T20:52:27.503121755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-drzrh,Uid:358b11af-2fe9-4ee1-bafa-6711cce1b7b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2186d5568aadb202d6c0edd547319144ddb8b2752e68b3b3f6cfd0ce2e64efbe\"" Feb 13 20:52:27.508006 containerd[1459]: time="2025-02-13T20:52:27.507525094Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 20:52:27.515357 containerd[1459]: time="2025-02-13T20:52:27.515304576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tqnr4,Uid:c744909b-22b6-420b-8e32-54ee72b34026,Namespace:kube-system,Attempt:0,} returns sandbox id \"78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48\"" Feb 13 20:52:28.035509 kubelet[1852]: E0213 20:52:28.035327 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:29.035773 kubelet[1852]: E0213 20:52:29.035659 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:29.446840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1297874238.mount: Deactivated successfully. 
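The kubelet reconciler_common.go entries earlier in this section record one VerifyControllerAttachedVolume operation per volume of cilium-tqnr4 and kube-proxy-drzrh, each carrying the volume name, a UniqueName whose prefix names the volume plugin (kubernetes.io/host-path, kubernetes.io/secret, kubernetes.io/configmap, kubernetes.io/projected), and the owning pod. A rough way to tabulate them from a saved journal is sketched below; the field layout and quoting are assumptions taken from the lines above, not a stable kubelet interface.

    import re
    from collections import defaultdict

    # Quoting/escaping as it appears in the kubelet lines above.
    ENTRY = re.compile(
        r'VerifyControllerAttachedVolume started for volume \\"(?P<vol>[^"\\]+)\\" '
        r'\(UniqueName: \\"(?P<unique>[^"\\]+)\\".*?pod="(?P<pod>[^"]+)"'
    )

    def volumes_by_pod(journal_text):
        """Return {pod: [(volume_name, plugin)]} from reconciler_common entries."""
        out = defaultdict(list)
        for m in ENTRY.finditer(journal_text):
            plugin = m.group("unique").split("/")[1]   # e.g. "host-path", "secret"
            out[m.group("pod")].append((m.group("vol"), plugin))
        return out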
Feb 13 20:52:30.036449 kubelet[1852]: E0213 20:52:30.036402 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:30.272810 containerd[1459]: time="2025-02-13T20:52:30.272722295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:52:30.274483 containerd[1459]: time="2025-02-13T20:52:30.274207548Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229116" Feb 13 20:52:30.275910 containerd[1459]: time="2025-02-13T20:52:30.275841284Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:52:30.278705 containerd[1459]: time="2025-02-13T20:52:30.278661205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:52:30.279710 containerd[1459]: time="2025-02-13T20:52:30.279476691Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 2.771901557s" Feb 13 20:52:30.279710 containerd[1459]: time="2025-02-13T20:52:30.279529214Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 20:52:30.281480 containerd[1459]: time="2025-02-13T20:52:30.281434294Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 20:52:30.282664 containerd[1459]: time="2025-02-13T20:52:30.282592093Z" level=info msg="CreateContainer within sandbox \"2186d5568aadb202d6c0edd547319144ddb8b2752e68b3b3f6cfd0ce2e64efbe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:52:30.301142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2617570005.mount: Deactivated successfully. Feb 13 20:52:30.309545 containerd[1459]: time="2025-02-13T20:52:30.309490030Z" level=info msg="CreateContainer within sandbox \"2186d5568aadb202d6c0edd547319144ddb8b2752e68b3b3f6cfd0ce2e64efbe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d264b8e5f45ff8b603e2270f91e078086bfc4b22be3f2cb021f0aac2abb4af2\"" Feb 13 20:52:30.310346 containerd[1459]: time="2025-02-13T20:52:30.310224366Z" level=info msg="StartContainer for \"2d264b8e5f45ff8b603e2270f91e078086bfc4b22be3f2cb021f0aac2abb4af2\"" Feb 13 20:52:30.351018 systemd[1]: Started cri-containerd-2d264b8e5f45ff8b603e2270f91e078086bfc4b22be3f2cb021f0aac2abb4af2.scope - libcontainer container 2d264b8e5f45ff8b603e2270f91e078086bfc4b22be3f2cb021f0aac2abb4af2. 
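Each completed image pull above is summarised by containerd with its elapsed time: 731.512163ms and 713.537941ms for the two pause sandbox images, 2.771901557s for kube-proxy. A throwaway extractor for those figures, assuming only the "Pulled image ... in <duration>" wording visible in this journal, might be:

    import re

    PULL = re.compile(r'Pulled image \W{0,2}([\w./:@-]+).*? in ([\d.]+)(ms|s)')

    def pull_durations(journal_text):
        """Yield (image_ref, seconds) for every 'Pulled image ... in <duration>' entry."""
        for ref, value, unit in PULL.findall(journal_text):
            yield ref, float(value) / (1000.0 if unit == "ms" else 1.0)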
Feb 13 20:52:30.389351 containerd[1459]: time="2025-02-13T20:52:30.389282619Z" level=info msg="StartContainer for \"2d264b8e5f45ff8b603e2270f91e078086bfc4b22be3f2cb021f0aac2abb4af2\" returns successfully" Feb 13 20:52:31.037437 kubelet[1852]: E0213 20:52:31.037334 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:31.245444 kubelet[1852]: I0213 20:52:31.245193 1852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-drzrh" podStartSLOduration=4.471150944 podStartE2EDuration="7.245153632s" podCreationTimestamp="2025-02-13 20:52:24 +0000 UTC" firstStartedPulling="2025-02-13 20:52:27.506655929 +0000 UTC m=+4.358505073" lastFinishedPulling="2025-02-13 20:52:30.280658627 +0000 UTC m=+7.132507761" observedRunningTime="2025-02-13 20:52:31.245029568 +0000 UTC m=+8.096878813" watchObservedRunningTime="2025-02-13 20:52:31.245153632 +0000 UTC m=+8.097002886" Feb 13 20:52:32.039509 kubelet[1852]: E0213 20:52:32.037701 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:33.038710 kubelet[1852]: E0213 20:52:33.038495 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:34.039718 kubelet[1852]: E0213 20:52:34.039531 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:35.008824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount150832442.mount: Deactivated successfully. Feb 13 20:52:35.040587 kubelet[1852]: E0213 20:52:35.040535 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:35.431060 update_engine[1443]: I20250213 20:52:35.430929 1443 update_attempter.cc:509] Updating boot flags... 
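The pod_startup_latency_tracker entry just above reports two durations for kube-proxy-drzrh. The figures are consistent with podStartE2EDuration being observedRunningTime minus podCreationTimestamp, and podStartSLOduration being that interval with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted; that reading is inferred from the numbers, not stated by the entry. A quick check with the values printed above:

    # Timestamps copied from the kube-proxy-drzrh entry above (2025-02-13, UTC).
    created          = 24.000000000   # podCreationTimestamp  20:52:24
    observed_running = 31.245153632   # observedRunningTime   20:52:31.245153632
    pull_started     = 4.358505073    # firstStartedPulling   m=+4.358505073 (monotonic)
    pull_finished    = 7.132507761    # lastFinishedPulling   m=+7.132507761 (monotonic)

    e2e = observed_running - created              # ~7.245153632s, the podStartE2EDuration
    slo = e2e - (pull_finished - pull_started)    # ~4.471150944s, the podStartSLOduration
    print(f"E2E={e2e:.9f}s  SLO={slo:.9f}s")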
Feb 13 20:52:35.511695 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2178) Feb 13 20:52:35.602785 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2177) Feb 13 20:52:35.760397 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2177) Feb 13 20:52:36.041019 kubelet[1852]: E0213 20:52:36.040979 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:37.041140 kubelet[1852]: E0213 20:52:37.041089 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:38.041256 kubelet[1852]: E0213 20:52:38.041215 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:39.043002 kubelet[1852]: E0213 20:52:39.042918 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:39.122584 containerd[1459]: time="2025-02-13T20:52:39.122437962Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:52:39.126251 containerd[1459]: time="2025-02-13T20:52:39.126112121Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 20:52:39.129669 containerd[1459]: time="2025-02-13T20:52:39.128163919Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:52:39.134498 containerd[1459]: time="2025-02-13T20:52:39.134380277Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.852874362s" Feb 13 20:52:39.134961 containerd[1459]: time="2025-02-13T20:52:39.134899470Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 20:52:39.142022 containerd[1459]: time="2025-02-13T20:52:39.141680169Z" level=info msg="CreateContainer within sandbox \"78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 20:52:39.188837 containerd[1459]: time="2025-02-13T20:52:39.188755560Z" level=info msg="CreateContainer within sandbox \"78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82\"" Feb 13 20:52:39.190492 containerd[1459]: time="2025-02-13T20:52:39.190443646Z" level=info msg="StartContainer for \"794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82\"" Feb 13 20:52:39.252832 systemd[1]: Started 
cri-containerd-794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82.scope - libcontainer container 794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82. Feb 13 20:52:39.296631 containerd[1459]: time="2025-02-13T20:52:39.296401011Z" level=info msg="StartContainer for \"794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82\" returns successfully" Feb 13 20:52:39.306480 systemd[1]: cri-containerd-794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82.scope: Deactivated successfully. Feb 13 20:52:40.044338 kubelet[1852]: E0213 20:52:40.044179 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:40.174213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82-rootfs.mount: Deactivated successfully. Feb 13 20:52:40.235940 containerd[1459]: time="2025-02-13T20:52:40.235555018Z" level=info msg="shim disconnected" id=794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82 namespace=k8s.io Feb 13 20:52:40.235940 containerd[1459]: time="2025-02-13T20:52:40.235772077Z" level=warning msg="cleaning up after shim disconnected" id=794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82 namespace=k8s.io Feb 13 20:52:40.235940 containerd[1459]: time="2025-02-13T20:52:40.235796474Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:52:41.045565 kubelet[1852]: E0213 20:52:41.045446 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:41.266834 containerd[1459]: time="2025-02-13T20:52:41.266303906Z" level=info msg="CreateContainer within sandbox \"78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 20:52:41.316506 containerd[1459]: time="2025-02-13T20:52:41.316254831Z" level=info msg="CreateContainer within sandbox \"78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305\"" Feb 13 20:52:41.318679 containerd[1459]: time="2025-02-13T20:52:41.317880057Z" level=info msg="StartContainer for \"50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305\"" Feb 13 20:52:41.371980 systemd[1]: Started cri-containerd-50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305.scope - libcontainer container 50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305. Feb 13 20:52:41.409958 containerd[1459]: time="2025-02-13T20:52:41.409855264Z" level=info msg="StartContainer for \"50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305\" returns successfully" Feb 13 20:52:41.419121 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:52:41.419795 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:52:41.420032 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:52:41.426884 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:52:41.427132 systemd[1]: cri-containerd-50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305.scope: Deactivated successfully. Feb 13 20:52:41.445129 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 20:52:41.448113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305-rootfs.mount: Deactivated successfully. Feb 13 20:52:41.456691 containerd[1459]: time="2025-02-13T20:52:41.456591567Z" level=info msg="shim disconnected" id=50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305 namespace=k8s.io Feb 13 20:52:41.456777 containerd[1459]: time="2025-02-13T20:52:41.456689556Z" level=warning msg="cleaning up after shim disconnected" id=50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305 namespace=k8s.io Feb 13 20:52:41.456777 containerd[1459]: time="2025-02-13T20:52:41.456720015Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:52:42.046466 kubelet[1852]: E0213 20:52:42.046356 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:42.275183 containerd[1459]: time="2025-02-13T20:52:42.274671638Z" level=info msg="CreateContainer within sandbox \"78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 20:52:42.310983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405151446.mount: Deactivated successfully. Feb 13 20:52:42.321906 containerd[1459]: time="2025-02-13T20:52:42.321772111Z" level=info msg="CreateContainer within sandbox \"78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034\"" Feb 13 20:52:42.324680 containerd[1459]: time="2025-02-13T20:52:42.323072918Z" level=info msg="StartContainer for \"b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034\"" Feb 13 20:52:42.384800 systemd[1]: Started cri-containerd-b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034.scope - libcontainer container b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034. Feb 13 20:52:42.425868 containerd[1459]: time="2025-02-13T20:52:42.425817480Z" level=info msg="StartContainer for \"b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034\" returns successfully" Feb 13 20:52:42.426135 systemd[1]: cri-containerd-b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034.scope: Deactivated successfully. 
Feb 13 20:52:42.460702 containerd[1459]: time="2025-02-13T20:52:42.460575870Z" level=info msg="shim disconnected" id=b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034 namespace=k8s.io Feb 13 20:52:42.460945 containerd[1459]: time="2025-02-13T20:52:42.460697505Z" level=warning msg="cleaning up after shim disconnected" id=b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034 namespace=k8s.io Feb 13 20:52:42.460945 containerd[1459]: time="2025-02-13T20:52:42.460729677Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:52:43.047364 kubelet[1852]: E0213 20:52:43.047184 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:43.285073 containerd[1459]: time="2025-02-13T20:52:43.285000842Z" level=info msg="CreateContainer within sandbox \"78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 20:52:43.303144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034-rootfs.mount: Deactivated successfully. Feb 13 20:52:43.340591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1535831572.mount: Deactivated successfully. Feb 13 20:52:43.345947 containerd[1459]: time="2025-02-13T20:52:43.345649290Z" level=info msg="CreateContainer within sandbox \"78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2\"" Feb 13 20:52:43.347371 containerd[1459]: time="2025-02-13T20:52:43.347202570Z" level=info msg="StartContainer for \"69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2\"" Feb 13 20:52:43.401846 systemd[1]: Started cri-containerd-69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2.scope - libcontainer container 69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2. Feb 13 20:52:43.426294 systemd[1]: cri-containerd-69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2.scope: Deactivated successfully. 
Feb 13 20:52:43.434414 containerd[1459]: time="2025-02-13T20:52:43.434331577Z" level=info msg="StartContainer for \"69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2\" returns successfully" Feb 13 20:52:43.472555 containerd[1459]: time="2025-02-13T20:52:43.472474205Z" level=info msg="shim disconnected" id=69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2 namespace=k8s.io Feb 13 20:52:43.472555 containerd[1459]: time="2025-02-13T20:52:43.472536285Z" level=warning msg="cleaning up after shim disconnected" id=69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2 namespace=k8s.io Feb 13 20:52:43.472555 containerd[1459]: time="2025-02-13T20:52:43.472548589Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:52:44.025704 kubelet[1852]: E0213 20:52:44.025556 1852 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:44.047855 kubelet[1852]: E0213 20:52:44.047794 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:44.293474 containerd[1459]: time="2025-02-13T20:52:44.293269712Z" level=info msg="CreateContainer within sandbox \"78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 20:52:44.306976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2-rootfs.mount: Deactivated successfully. Feb 13 20:52:44.401374 containerd[1459]: time="2025-02-13T20:52:44.401159055Z" level=info msg="CreateContainer within sandbox \"78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620\"" Feb 13 20:52:44.404671 containerd[1459]: time="2025-02-13T20:52:44.402494693Z" level=info msg="StartContainer for \"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620\"" Feb 13 20:52:44.462014 systemd[1]: Started cri-containerd-82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620.scope - libcontainer container 82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620. Feb 13 20:52:44.503732 containerd[1459]: time="2025-02-13T20:52:44.503664195Z" level=info msg="StartContainer for \"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620\" returns successfully" Feb 13 20:52:44.614916 kubelet[1852]: I0213 20:52:44.614128 1852 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 20:52:44.942693 kernel: Initializing XFRM netlink socket Feb 13 20:52:45.048131 kubelet[1852]: E0213 20:52:45.048063 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:45.304406 systemd[1]: run-containerd-runc-k8s.io-82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620-runc.QTEgwa.mount: Deactivated successfully. 
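The mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state and cilium-agent containers above all leave the same trace: CreateContainer returns a 64-hex-digit container id, StartContainer runs it, the cri-containerd scope is deactivated when it exits, and containerd logs the shim teardown. Grouping journal lines by those ids makes each lifecycle easier to follow; a small sketch, with the id format assumed from the lines above:

    import re
    from collections import defaultdict

    CONTAINER_ID = re.compile(r"\b[0-9a-f]{64}\b")

    def events_by_container(journal_lines):
        """Group journal lines by each 64-hex-digit container or sandbox id they mention."""
        grouped = defaultdict(list)
        for line in journal_lines:
            for cid in set(CONTAINER_ID.findall(line)):
                grouped[cid].append(line)
        return grouped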
Feb 13 20:52:46.048955 kubelet[1852]: E0213 20:52:46.048842 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:46.652516 systemd-networkd[1363]: cilium_host: Link UP Feb 13 20:52:46.656724 systemd-networkd[1363]: cilium_net: Link UP Feb 13 20:52:46.656737 systemd-networkd[1363]: cilium_net: Gained carrier Feb 13 20:52:46.657130 systemd-networkd[1363]: cilium_host: Gained carrier Feb 13 20:52:46.657545 systemd-networkd[1363]: cilium_host: Gained IPv6LL Feb 13 20:52:46.791057 systemd-networkd[1363]: cilium_vxlan: Link UP Feb 13 20:52:46.791066 systemd-networkd[1363]: cilium_vxlan: Gained carrier Feb 13 20:52:47.049366 kubelet[1852]: E0213 20:52:47.049252 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:47.121760 kernel: NET: Registered PF_ALG protocol family Feb 13 20:52:47.518882 systemd-networkd[1363]: cilium_net: Gained IPv6LL Feb 13 20:52:48.049695 kubelet[1852]: E0213 20:52:48.049558 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:48.070999 systemd-networkd[1363]: lxc_health: Link UP Feb 13 20:52:48.086565 systemd-networkd[1363]: lxc_health: Gained carrier Feb 13 20:52:48.094819 systemd-networkd[1363]: cilium_vxlan: Gained IPv6LL Feb 13 20:52:48.465020 kubelet[1852]: I0213 20:52:48.464942 1852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tqnr4" podStartSLOduration=12.84456071 podStartE2EDuration="24.464906082s" podCreationTimestamp="2025-02-13 20:52:24 +0000 UTC" firstStartedPulling="2025-02-13 20:52:27.517194958 +0000 UTC m=+4.369044092" lastFinishedPulling="2025-02-13 20:52:39.13754026 +0000 UTC m=+15.989389464" observedRunningTime="2025-02-13 20:52:45.336319332 +0000 UTC m=+22.188168516" watchObservedRunningTime="2025-02-13 20:52:48.464906082 +0000 UTC m=+25.316755216" Feb 13 20:52:49.050687 kubelet[1852]: E0213 20:52:49.050543 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:49.595438 systemd[1]: Created slice kubepods-besteffort-pod5704e059_cce0_4d3f_b0d3_1f06fa4a1883.slice - libcontainer container kubepods-besteffort-pod5704e059_cce0_4d3f_b0d3_1f06fa4a1883.slice. 
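systemd-networkd reports the cilium_host, cilium_net, cilium_vxlan and lxc_health links coming up as the agent builds its datapath, and per-pod lxc* interfaces follow. To list the same interfaces on a node, one option is iproute2's JSON output; this is an operational convenience, not something taken from the log's own tooling, and it assumes the ip binary is on PATH:

    import json
    import subprocess

    def cilium_links():
        """Name every cilium_* or lxc* interface reported by `ip -json link show`."""
        out = subprocess.run(["ip", "-json", "link", "show"],
                             check=True, capture_output=True, text=True).stdout
        return [link["ifname"] for link in json.loads(out)
                if link["ifname"].startswith(("cilium_", "lxc"))]

    # On this node the journal above names cilium_host, cilium_net, cilium_vxlan and lxc_health.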
Feb 13 20:52:49.642508 kubelet[1852]: I0213 20:52:49.642404 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trgtj\" (UniqueName: \"kubernetes.io/projected/5704e059-cce0-4d3f-b0d3-1f06fa4a1883-kube-api-access-trgtj\") pod \"nginx-deployment-8587fbcb89-sq4rc\" (UID: \"5704e059-cce0-4d3f-b0d3-1f06fa4a1883\") " pod="default/nginx-deployment-8587fbcb89-sq4rc" Feb 13 20:52:49.758931 systemd-networkd[1363]: lxc_health: Gained IPv6LL Feb 13 20:52:49.909010 containerd[1459]: time="2025-02-13T20:52:49.908040489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-sq4rc,Uid:5704e059-cce0-4d3f-b0d3-1f06fa4a1883,Namespace:default,Attempt:0,}" Feb 13 20:52:49.993974 systemd-networkd[1363]: lxcb3c8da1c1bc6: Link UP Feb 13 20:52:50.000812 kernel: eth0: renamed from tmp5eef5 Feb 13 20:52:50.011014 systemd-networkd[1363]: lxcb3c8da1c1bc6: Gained carrier Feb 13 20:52:50.051148 kubelet[1852]: E0213 20:52:50.051084 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:51.052285 kubelet[1852]: E0213 20:52:51.052187 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:51.870928 systemd-networkd[1363]: lxcb3c8da1c1bc6: Gained IPv6LL Feb 13 20:52:52.053290 kubelet[1852]: E0213 20:52:52.053169 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:53.055842 kubelet[1852]: E0213 20:52:53.054833 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:53.823112 containerd[1459]: time="2025-02-13T20:52:53.822954389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:52:53.823112 containerd[1459]: time="2025-02-13T20:52:53.823063306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:52:53.823112 containerd[1459]: time="2025-02-13T20:52:53.823086460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:52:53.823949 containerd[1459]: time="2025-02-13T20:52:53.823183706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:52:53.868832 systemd[1]: Started cri-containerd-5eef56ff442caabfc0b4f78550ceadb0cbeec8b0846b7ba3362381fd7596ea6e.scope - libcontainer container 5eef56ff442caabfc0b4f78550ceadb0cbeec8b0846b7ba3362381fd7596ea6e. 
Feb 13 20:52:53.910012 containerd[1459]: time="2025-02-13T20:52:53.909949984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-sq4rc,Uid:5704e059-cce0-4d3f-b0d3-1f06fa4a1883,Namespace:default,Attempt:0,} returns sandbox id \"5eef56ff442caabfc0b4f78550ceadb0cbeec8b0846b7ba3362381fd7596ea6e\"" Feb 13 20:52:53.912247 containerd[1459]: time="2025-02-13T20:52:53.912212193Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 20:52:54.055113 kubelet[1852]: E0213 20:52:54.055003 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:55.055999 kubelet[1852]: E0213 20:52:55.055904 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:56.056145 kubelet[1852]: E0213 20:52:56.056086 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:57.056389 kubelet[1852]: E0213 20:52:57.056350 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:57.444925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount63067286.mount: Deactivated successfully. Feb 13 20:52:58.057686 kubelet[1852]: E0213 20:52:58.057647 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:52:58.777281 containerd[1459]: time="2025-02-13T20:52:58.777213076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:52:58.778861 containerd[1459]: time="2025-02-13T20:52:58.778586201Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 20:52:58.780037 containerd[1459]: time="2025-02-13T20:52:58.779969725Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:52:58.785391 containerd[1459]: time="2025-02-13T20:52:58.785367494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:52:58.786620 containerd[1459]: time="2025-02-13T20:52:58.786476175Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 4.874220298s" Feb 13 20:52:58.786620 containerd[1459]: time="2025-02-13T20:52:58.786509539Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 20:52:58.789027 containerd[1459]: time="2025-02-13T20:52:58.788960425Z" level=info msg="CreateContainer within sandbox \"5eef56ff442caabfc0b4f78550ceadb0cbeec8b0846b7ba3362381fd7596ea6e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 20:52:58.806060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2942046870.mount: Deactivated successfully. 
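Unit names such as var-lib-containerd-tmpmounts-containerd\x2dmount2942046870.mount above are systemd's escaped form of containerd's temporary mount points under /var/lib/containerd/tmpmounts/. Assuming the standard escaping rules (dashes separate path components, \xNN encodes a literal byte), a tiny decoder looks like:

    import re

    def unescape_mount_unit(unit):
        """Best-effort decode of a systemd mount unit name back to the mounted path."""
        name = unit.rsplit(".", 1)[0]   # drop the ".mount" suffix
        unhex = lambda part: re.sub(r"\\x([0-9a-fA-F]{2})",
                                    lambda m: chr(int(m.group(1), 16)), part)
        return "/" + "/".join(unhex(p) for p in name.split("-"))

    print(unescape_mount_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount2942046870.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount2942046870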
Feb 13 20:52:58.809109 containerd[1459]: time="2025-02-13T20:52:58.809080158Z" level=info msg="CreateContainer within sandbox \"5eef56ff442caabfc0b4f78550ceadb0cbeec8b0846b7ba3362381fd7596ea6e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b3296dfbd11b6b02c6dbf46118f97acc1a2d495181d951ff30c63710c9382d8e\"" Feb 13 20:52:58.809854 containerd[1459]: time="2025-02-13T20:52:58.809836588Z" level=info msg="StartContainer for \"b3296dfbd11b6b02c6dbf46118f97acc1a2d495181d951ff30c63710c9382d8e\"" Feb 13 20:52:58.846767 systemd[1]: Started cri-containerd-b3296dfbd11b6b02c6dbf46118f97acc1a2d495181d951ff30c63710c9382d8e.scope - libcontainer container b3296dfbd11b6b02c6dbf46118f97acc1a2d495181d951ff30c63710c9382d8e. Feb 13 20:52:58.883949 containerd[1459]: time="2025-02-13T20:52:58.883902696Z" level=info msg="StartContainer for \"b3296dfbd11b6b02c6dbf46118f97acc1a2d495181d951ff30c63710c9382d8e\" returns successfully" Feb 13 20:52:59.059205 kubelet[1852]: E0213 20:52:59.058943 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:00.059835 kubelet[1852]: E0213 20:53:00.059728 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:01.060319 kubelet[1852]: E0213 20:53:01.060231 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:02.060819 kubelet[1852]: E0213 20:53:02.060727 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:03.061517 kubelet[1852]: E0213 20:53:03.061410 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:04.026191 kubelet[1852]: E0213 20:53:04.026073 1852 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:04.062173 kubelet[1852]: E0213 20:53:04.061999 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:05.062812 kubelet[1852]: E0213 20:53:05.062749 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:06.063980 kubelet[1852]: E0213 20:53:06.063820 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:07.064477 kubelet[1852]: E0213 20:53:07.064313 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:08.064647 kubelet[1852]: E0213 20:53:08.064522 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:09.065840 kubelet[1852]: E0213 20:53:09.065748 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:10.066286 kubelet[1852]: E0213 20:53:10.066185 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:11.066909 kubelet[1852]: E0213 20:53:11.066827 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:11.739450 kubelet[1852]: I0213 20:53:11.739307 1852 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-sq4rc" podStartSLOduration=17.86318647 podStartE2EDuration="22.739272854s" podCreationTimestamp="2025-02-13 20:52:49 +0000 UTC" firstStartedPulling="2025-02-13 20:52:53.911705335 +0000 UTC m=+30.763554469" lastFinishedPulling="2025-02-13 20:52:58.787791719 +0000 UTC m=+35.639640853" observedRunningTime="2025-02-13 20:52:59.368213736 +0000 UTC m=+36.220062920" watchObservedRunningTime="2025-02-13 20:53:11.739272854 +0000 UTC m=+48.591122038" Feb 13 20:53:11.753883 systemd[1]: Created slice kubepods-besteffort-pod9c795875_17a8_499f_9e4b_83a64c573515.slice - libcontainer container kubepods-besteffort-pod9c795875_17a8_499f_9e4b_83a64c573515.slice. Feb 13 20:53:11.801551 kubelet[1852]: I0213 20:53:11.801431 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwntz\" (UniqueName: \"kubernetes.io/projected/9c795875-17a8-499f-9e4b-83a64c573515-kube-api-access-zwntz\") pod \"nfs-server-provisioner-0\" (UID: \"9c795875-17a8-499f-9e4b-83a64c573515\") " pod="default/nfs-server-provisioner-0" Feb 13 20:53:11.801551 kubelet[1852]: I0213 20:53:11.801532 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9c795875-17a8-499f-9e4b-83a64c573515-data\") pod \"nfs-server-provisioner-0\" (UID: \"9c795875-17a8-499f-9e4b-83a64c573515\") " pod="default/nfs-server-provisioner-0" Feb 13 20:53:12.060418 containerd[1459]: time="2025-02-13T20:53:12.060060665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9c795875-17a8-499f-9e4b-83a64c573515,Namespace:default,Attempt:0,}" Feb 13 20:53:12.067559 kubelet[1852]: E0213 20:53:12.067478 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:12.141451 systemd-networkd[1363]: lxc8703ba163ebc: Link UP Feb 13 20:53:12.150732 kernel: eth0: renamed from tmpec803 Feb 13 20:53:12.163967 systemd-networkd[1363]: lxc8703ba163ebc: Gained carrier Feb 13 20:53:12.443494 containerd[1459]: time="2025-02-13T20:53:12.443113496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:53:12.443494 containerd[1459]: time="2025-02-13T20:53:12.443185593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:53:12.443494 containerd[1459]: time="2025-02-13T20:53:12.443201964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:53:12.443494 containerd[1459]: time="2025-02-13T20:53:12.443288508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:53:12.472915 systemd[1]: Started cri-containerd-ec803b1e304d02f8274a6b0cc6c6047beaa957b1caff932a804a8952d68de919.scope - libcontainer container ec803b1e304d02f8274a6b0cc6c6047beaa957b1caff932a804a8952d68de919. 
Feb 13 20:53:12.525105 containerd[1459]: time="2025-02-13T20:53:12.525040739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9c795875-17a8-499f-9e4b-83a64c573515,Namespace:default,Attempt:0,} returns sandbox id \"ec803b1e304d02f8274a6b0cc6c6047beaa957b1caff932a804a8952d68de919\"" Feb 13 20:53:12.528645 containerd[1459]: time="2025-02-13T20:53:12.528153713Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 20:53:13.068356 kubelet[1852]: E0213 20:53:13.068255 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:14.071740 kubelet[1852]: E0213 20:53:14.069431 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:14.079115 systemd-networkd[1363]: lxc8703ba163ebc: Gained IPv6LL Feb 13 20:53:15.069722 kubelet[1852]: E0213 20:53:15.069640 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:15.985528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2226263634.mount: Deactivated successfully. Feb 13 20:53:16.070586 kubelet[1852]: E0213 20:53:16.070492 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:17.072226 kubelet[1852]: E0213 20:53:17.072180 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:18.072560 kubelet[1852]: E0213 20:53:18.072409 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:18.731487 containerd[1459]: time="2025-02-13T20:53:18.731253543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:53:18.732704 containerd[1459]: time="2025-02-13T20:53:18.732642375Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Feb 13 20:53:18.734652 containerd[1459]: time="2025-02-13T20:53:18.734551403Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:53:18.738363 containerd[1459]: time="2025-02-13T20:53:18.738304565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:53:18.739580 containerd[1459]: time="2025-02-13T20:53:18.739428205Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.211207064s" Feb 13 20:53:18.739580 containerd[1459]: time="2025-02-13T20:53:18.739469132Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 20:53:18.742018 containerd[1459]: 
time="2025-02-13T20:53:18.741977205Z" level=info msg="CreateContainer within sandbox \"ec803b1e304d02f8274a6b0cc6c6047beaa957b1caff932a804a8952d68de919\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 20:53:18.771334 containerd[1459]: time="2025-02-13T20:53:18.771265949Z" level=info msg="CreateContainer within sandbox \"ec803b1e304d02f8274a6b0cc6c6047beaa957b1caff932a804a8952d68de919\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"0083f4a7e27aa7ed2479dbbe4f35d4198219e51c9f52b5c95e9ab0e29c947d82\"" Feb 13 20:53:18.772121 containerd[1459]: time="2025-02-13T20:53:18.772071957Z" level=info msg="StartContainer for \"0083f4a7e27aa7ed2479dbbe4f35d4198219e51c9f52b5c95e9ab0e29c947d82\"" Feb 13 20:53:18.814903 systemd[1]: Started cri-containerd-0083f4a7e27aa7ed2479dbbe4f35d4198219e51c9f52b5c95e9ab0e29c947d82.scope - libcontainer container 0083f4a7e27aa7ed2479dbbe4f35d4198219e51c9f52b5c95e9ab0e29c947d82. Feb 13 20:53:18.860688 containerd[1459]: time="2025-02-13T20:53:18.860624691Z" level=info msg="StartContainer for \"0083f4a7e27aa7ed2479dbbe4f35d4198219e51c9f52b5c95e9ab0e29c947d82\" returns successfully" Feb 13 20:53:19.073654 kubelet[1852]: E0213 20:53:19.073517 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:19.595641 kubelet[1852]: I0213 20:53:19.595408 1852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.381946239 podStartE2EDuration="8.595376127s" podCreationTimestamp="2025-02-13 20:53:11 +0000 UTC" firstStartedPulling="2025-02-13 20:53:12.527371048 +0000 UTC m=+49.379220182" lastFinishedPulling="2025-02-13 20:53:18.740800936 +0000 UTC m=+55.592650070" observedRunningTime="2025-02-13 20:53:19.593049158 +0000 UTC m=+56.444898342" watchObservedRunningTime="2025-02-13 20:53:19.595376127 +0000 UTC m=+56.447225312" Feb 13 20:53:20.074286 kubelet[1852]: E0213 20:53:20.074204 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:21.074954 kubelet[1852]: E0213 20:53:21.074663 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:22.075784 kubelet[1852]: E0213 20:53:22.075698 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:23.077044 kubelet[1852]: E0213 20:53:23.076937 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:24.026003 kubelet[1852]: E0213 20:53:24.025896 1852 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:24.078293 kubelet[1852]: E0213 20:53:24.078218 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:25.078647 kubelet[1852]: E0213 20:53:25.078512 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:26.080025 kubelet[1852]: E0213 20:53:26.079938 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:27.080255 kubelet[1852]: E0213 20:53:27.080143 1852 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:28.080524 kubelet[1852]: E0213 20:53:28.080429 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:28.834219 systemd[1]: Created slice kubepods-besteffort-pod39b082bb_504b_4d9f_af35_e575a5a355a9.slice - libcontainer container kubepods-besteffort-pod39b082bb_504b_4d9f_af35_e575a5a355a9.slice. Feb 13 20:53:29.021408 kubelet[1852]: I0213 20:53:29.021205 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-07c7e6b4-da48-4447-b37c-a42fe29f8f6b\" (UniqueName: \"kubernetes.io/nfs/39b082bb-504b-4d9f-af35-e575a5a355a9-pvc-07c7e6b4-da48-4447-b37c-a42fe29f8f6b\") pod \"test-pod-1\" (UID: \"39b082bb-504b-4d9f-af35-e575a5a355a9\") " pod="default/test-pod-1" Feb 13 20:53:29.021408 kubelet[1852]: I0213 20:53:29.021317 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5smf\" (UniqueName: \"kubernetes.io/projected/39b082bb-504b-4d9f-af35-e575a5a355a9-kube-api-access-k5smf\") pod \"test-pod-1\" (UID: \"39b082bb-504b-4d9f-af35-e575a5a355a9\") " pod="default/test-pod-1" Feb 13 20:53:29.081469 kubelet[1852]: E0213 20:53:29.081375 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:29.186677 kernel: FS-Cache: Loaded Feb 13 20:53:29.278228 kernel: RPC: Registered named UNIX socket transport module. Feb 13 20:53:29.278426 kernel: RPC: Registered udp transport module. Feb 13 20:53:29.278474 kernel: RPC: Registered tcp transport module. Feb 13 20:53:29.278688 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 20:53:29.280569 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 20:53:29.585916 kernel: NFS: Registering the id_resolver key type Feb 13 20:53:29.586628 kernel: Key type id_resolver registered Feb 13 20:53:29.586669 kernel: Key type id_legacy registered Feb 13 20:53:29.634998 nfsidmap[3252]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Feb 13 20:53:29.643353 nfsidmap[3253]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Feb 13 20:53:29.739919 containerd[1459]: time="2025-02-13T20:53:29.739008842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:39b082bb-504b-4d9f-af35-e575a5a355a9,Namespace:default,Attempt:0,}" Feb 13 20:53:29.823209 systemd-networkd[1363]: lxccdd199861ef1: Link UP Feb 13 20:53:29.837220 kernel: eth0: renamed from tmp6c0b9 Feb 13 20:53:29.840521 systemd-networkd[1363]: lxccdd199861ef1: Gained carrier Feb 13 20:53:30.081941 kubelet[1852]: E0213 20:53:30.081849 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:30.129095 containerd[1459]: time="2025-02-13T20:53:30.128493828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:53:30.129095 containerd[1459]: time="2025-02-13T20:53:30.128591360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:53:30.129095 containerd[1459]: time="2025-02-13T20:53:30.128670699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:53:30.129095 containerd[1459]: time="2025-02-13T20:53:30.128760837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:53:30.159838 systemd[1]: Started cri-containerd-6c0b97376eb995bcd946b9840e61421a10778aa226de5b50d9f4ddafe4ccd76d.scope - libcontainer container 6c0b97376eb995bcd946b9840e61421a10778aa226de5b50d9f4ddafe4ccd76d. Feb 13 20:53:30.211434 containerd[1459]: time="2025-02-13T20:53:30.211387272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:39b082bb-504b-4d9f-af35-e575a5a355a9,Namespace:default,Attempt:0,} returns sandbox id \"6c0b97376eb995bcd946b9840e61421a10778aa226de5b50d9f4ddafe4ccd76d\"" Feb 13 20:53:30.215446 containerd[1459]: time="2025-02-13T20:53:30.215357314Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 20:53:30.652122 containerd[1459]: time="2025-02-13T20:53:30.651859695Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:53:30.653774 containerd[1459]: time="2025-02-13T20:53:30.653647373Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 20:53:30.662595 containerd[1459]: time="2025-02-13T20:53:30.662314614Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 446.907698ms" Feb 13 20:53:30.662595 containerd[1459]: time="2025-02-13T20:53:30.662388703Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 20:53:30.667118 containerd[1459]: time="2025-02-13T20:53:30.666558428Z" level=info msg="CreateContainer within sandbox \"6c0b97376eb995bcd946b9840e61421a10778aa226de5b50d9f4ddafe4ccd76d\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 20:53:30.702263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2223430409.mount: Deactivated successfully. Feb 13 20:53:30.707539 containerd[1459]: time="2025-02-13T20:53:30.707298420Z" level=info msg="CreateContainer within sandbox \"6c0b97376eb995bcd946b9840e61421a10778aa226de5b50d9f4ddafe4ccd76d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"235ca145ef8fde413850dbbafb6d66fef0ac7cf4a39a77206e047fbca4ec082d\"" Feb 13 20:53:30.709001 containerd[1459]: time="2025-02-13T20:53:30.708889652Z" level=info msg="StartContainer for \"235ca145ef8fde413850dbbafb6d66fef0ac7cf4a39a77206e047fbca4ec082d\"" Feb 13 20:53:30.764858 systemd[1]: Started cri-containerd-235ca145ef8fde413850dbbafb6d66fef0ac7cf4a39a77206e047fbca4ec082d.scope - libcontainer container 235ca145ef8fde413850dbbafb6d66fef0ac7cf4a39a77206e047fbca4ec082d. 
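This second pull of ghcr.io/flatcar/nginx:latest returns the same image id as the pull finished at 20:52:58 but reads only 61 bytes in under half a second, against roughly 73 MB in almost five seconds the first time; the layers were evidently already in the local content store, so only registry metadata was re-fetched (an interpretation of the byte counts, not something the log states). Summing the "bytes read" figures per image from a saved journal, assuming the message wording shown above:

    import re
    from collections import defaultdict

    STOP = re.compile(r'stop pulling image (.+?): active requests=\d+, bytes read=(\d+)')

    def bytes_read_per_image(journal_text):
        """Total the 'bytes read' reported each time containerd stops pulling an image."""
        totals = defaultdict(int)
        for ref, count in STOP.findall(journal_text):
            totals[ref] += int(count)
        return dict(totals)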
Feb 13 20:53:30.804351 containerd[1459]: time="2025-02-13T20:53:30.804298202Z" level=info msg="StartContainer for \"235ca145ef8fde413850dbbafb6d66fef0ac7cf4a39a77206e047fbca4ec082d\" returns successfully" Feb 13 20:53:31.083112 kubelet[1852]: E0213 20:53:31.082979 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:31.631165 kubelet[1852]: I0213 20:53:31.631069 1852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.181841204 podStartE2EDuration="18.631037464s" podCreationTimestamp="2025-02-13 20:53:13 +0000 UTC" firstStartedPulling="2025-02-13 20:53:30.214776439 +0000 UTC m=+67.066625584" lastFinishedPulling="2025-02-13 20:53:30.66397266 +0000 UTC m=+67.515821844" observedRunningTime="2025-02-13 20:53:31.629914818 +0000 UTC m=+68.481764012" watchObservedRunningTime="2025-02-13 20:53:31.631037464 +0000 UTC m=+68.482886639" Feb 13 20:53:31.679121 systemd-networkd[1363]: lxccdd199861ef1: Gained IPv6LL Feb 13 20:53:32.084179 kubelet[1852]: E0213 20:53:32.084082 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:33.084916 kubelet[1852]: E0213 20:53:33.084762 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:34.085532 kubelet[1852]: E0213 20:53:34.085403 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:35.086338 kubelet[1852]: E0213 20:53:35.086111 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:36.086985 kubelet[1852]: E0213 20:53:36.086912 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:37.087567 kubelet[1852]: E0213 20:53:37.087496 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:38.088717 kubelet[1852]: E0213 20:53:38.088595 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:39.089022 kubelet[1852]: E0213 20:53:39.088939 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:40.090224 kubelet[1852]: E0213 20:53:40.090115 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:40.677944 systemd[1]: run-containerd-runc-k8s.io-82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620-runc.RUJKsW.mount: Deactivated successfully. 
Feb 13 20:53:40.690746 containerd[1459]: time="2025-02-13T20:53:40.690504978Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:53:40.700359 containerd[1459]: time="2025-02-13T20:53:40.700233929Z" level=info msg="StopContainer for \"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620\" with timeout 2 (s)" Feb 13 20:53:40.700861 containerd[1459]: time="2025-02-13T20:53:40.700772247Z" level=info msg="Stop container \"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620\" with signal terminated" Feb 13 20:53:40.717026 systemd-networkd[1363]: lxc_health: Link DOWN Feb 13 20:53:40.717038 systemd-networkd[1363]: lxc_health: Lost carrier Feb 13 20:53:40.737013 systemd[1]: cri-containerd-82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620.scope: Deactivated successfully. Feb 13 20:53:40.738536 systemd[1]: cri-containerd-82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620.scope: Consumed 9.225s CPU time. Feb 13 20:53:40.761874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620-rootfs.mount: Deactivated successfully. Feb 13 20:53:41.091112 kubelet[1852]: E0213 20:53:41.090856 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:41.656407 containerd[1459]: time="2025-02-13T20:53:41.656200981Z" level=info msg="shim disconnected" id=82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620 namespace=k8s.io Feb 13 20:53:41.656407 containerd[1459]: time="2025-02-13T20:53:41.656394714Z" level=warning msg="cleaning up after shim disconnected" id=82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620 namespace=k8s.io Feb 13 20:53:41.657301 containerd[1459]: time="2025-02-13T20:53:41.656428597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:53:41.697681 containerd[1459]: time="2025-02-13T20:53:41.697370036Z" level=info msg="StopContainer for \"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620\" returns successfully" Feb 13 20:53:41.703881 containerd[1459]: time="2025-02-13T20:53:41.700543351Z" level=info msg="StopPodSandbox for \"78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48\"" Feb 13 20:53:41.703881 containerd[1459]: time="2025-02-13T20:53:41.700756080Z" level=info msg="Container to stop \"794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:53:41.703881 containerd[1459]: time="2025-02-13T20:53:41.700831029Z" level=info msg="Container to stop \"50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:53:41.703881 containerd[1459]: time="2025-02-13T20:53:41.700857429Z" level=info msg="Container to stop \"b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:53:41.703881 containerd[1459]: time="2025-02-13T20:53:41.700923543Z" level=info msg="Container to stop \"69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:53:41.703881 containerd[1459]: 
time="2025-02-13T20:53:41.700949642Z" level=info msg="Container to stop \"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:53:41.709943 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48-shm.mount: Deactivated successfully. Feb 13 20:53:41.721116 systemd[1]: cri-containerd-78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48.scope: Deactivated successfully. Feb 13 20:53:41.754197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48-rootfs.mount: Deactivated successfully. Feb 13 20:53:41.760926 containerd[1459]: time="2025-02-13T20:53:41.760821764Z" level=info msg="shim disconnected" id=78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48 namespace=k8s.io Feb 13 20:53:41.761035 containerd[1459]: time="2025-02-13T20:53:41.760935748Z" level=warning msg="cleaning up after shim disconnected" id=78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48 namespace=k8s.io Feb 13 20:53:41.761035 containerd[1459]: time="2025-02-13T20:53:41.760959663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:53:41.778008 containerd[1459]: time="2025-02-13T20:53:41.777947307Z" level=info msg="TearDown network for sandbox \"78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48\" successfully" Feb 13 20:53:41.778008 containerd[1459]: time="2025-02-13T20:53:41.777993694Z" level=info msg="StopPodSandbox for \"78dae0e3c2e81770b1f1c25351073e3fb449ba8abe45a9cd9ce4a35539e00b48\" returns successfully" Feb 13 20:53:41.919950 kubelet[1852]: I0213 20:53:41.919877 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c744909b-22b6-420b-8e32-54ee72b34026-cilium-config-path\") pod \"c744909b-22b6-420b-8e32-54ee72b34026\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " Feb 13 20:53:41.920206 kubelet[1852]: I0213 20:53:41.919957 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-hostproc\") pod \"c744909b-22b6-420b-8e32-54ee72b34026\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " Feb 13 20:53:41.920206 kubelet[1852]: I0213 20:53:41.920006 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-cilium-cgroup\") pod \"c744909b-22b6-420b-8e32-54ee72b34026\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " Feb 13 20:53:41.920206 kubelet[1852]: I0213 20:53:41.920052 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c744909b-22b6-420b-8e32-54ee72b34026-clustermesh-secrets\") pod \"c744909b-22b6-420b-8e32-54ee72b34026\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " Feb 13 20:53:41.920206 kubelet[1852]: I0213 20:53:41.920095 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-etc-cni-netd\") pod \"c744909b-22b6-420b-8e32-54ee72b34026\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " Feb 13 20:53:41.920206 kubelet[1852]: I0213 20:53:41.920132 1852 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-bpf-maps\") pod \"c744909b-22b6-420b-8e32-54ee72b34026\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " Feb 13 20:53:41.920206 kubelet[1852]: I0213 20:53:41.920170 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-host-proc-sys-kernel\") pod \"c744909b-22b6-420b-8e32-54ee72b34026\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " Feb 13 20:53:41.920549 kubelet[1852]: I0213 20:53:41.920211 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-lib-modules\") pod \"c744909b-22b6-420b-8e32-54ee72b34026\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " Feb 13 20:53:41.920549 kubelet[1852]: I0213 20:53:41.920250 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-xtables-lock\") pod \"c744909b-22b6-420b-8e32-54ee72b34026\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " Feb 13 20:53:41.920549 kubelet[1852]: I0213 20:53:41.920316 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9lns\" (UniqueName: \"kubernetes.io/projected/c744909b-22b6-420b-8e32-54ee72b34026-kube-api-access-k9lns\") pod \"c744909b-22b6-420b-8e32-54ee72b34026\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " Feb 13 20:53:41.920549 kubelet[1852]: I0213 20:53:41.920362 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-cni-path\") pod \"c744909b-22b6-420b-8e32-54ee72b34026\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " Feb 13 20:53:41.920549 kubelet[1852]: I0213 20:53:41.920403 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c744909b-22b6-420b-8e32-54ee72b34026-hubble-tls\") pod \"c744909b-22b6-420b-8e32-54ee72b34026\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " Feb 13 20:53:41.920549 kubelet[1852]: I0213 20:53:41.920439 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-host-proc-sys-net\") pod \"c744909b-22b6-420b-8e32-54ee72b34026\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " Feb 13 20:53:41.920925 kubelet[1852]: I0213 20:53:41.920478 1852 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-cilium-run\") pod \"c744909b-22b6-420b-8e32-54ee72b34026\" (UID: \"c744909b-22b6-420b-8e32-54ee72b34026\") " Feb 13 20:53:41.920925 kubelet[1852]: I0213 20:53:41.920579 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c744909b-22b6-420b-8e32-54ee72b34026" (UID: "c744909b-22b6-420b-8e32-54ee72b34026"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:53:41.922643 kubelet[1852]: I0213 20:53:41.921119 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c744909b-22b6-420b-8e32-54ee72b34026" (UID: "c744909b-22b6-420b-8e32-54ee72b34026"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:53:41.922643 kubelet[1852]: I0213 20:53:41.921212 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-hostproc" (OuterVolumeSpecName: "hostproc") pod "c744909b-22b6-420b-8e32-54ee72b34026" (UID: "c744909b-22b6-420b-8e32-54ee72b34026"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:53:41.922643 kubelet[1852]: I0213 20:53:41.921251 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c744909b-22b6-420b-8e32-54ee72b34026" (UID: "c744909b-22b6-420b-8e32-54ee72b34026"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:53:41.923219 kubelet[1852]: I0213 20:53:41.923137 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c744909b-22b6-420b-8e32-54ee72b34026" (UID: "c744909b-22b6-420b-8e32-54ee72b34026"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:53:41.923316 kubelet[1852]: I0213 20:53:41.923235 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c744909b-22b6-420b-8e32-54ee72b34026" (UID: "c744909b-22b6-420b-8e32-54ee72b34026"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:53:41.923949 kubelet[1852]: I0213 20:53:41.923852 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-cni-path" (OuterVolumeSpecName: "cni-path") pod "c744909b-22b6-420b-8e32-54ee72b34026" (UID: "c744909b-22b6-420b-8e32-54ee72b34026"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:53:41.924818 kubelet[1852]: I0213 20:53:41.924571 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c744909b-22b6-420b-8e32-54ee72b34026" (UID: "c744909b-22b6-420b-8e32-54ee72b34026"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:53:41.925228 kubelet[1852]: I0213 20:53:41.925136 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c744909b-22b6-420b-8e32-54ee72b34026" (UID: "c744909b-22b6-420b-8e32-54ee72b34026"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:53:41.925311 kubelet[1852]: I0213 20:53:41.925244 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c744909b-22b6-420b-8e32-54ee72b34026" (UID: "c744909b-22b6-420b-8e32-54ee72b34026"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:53:41.932732 systemd[1]: var-lib-kubelet-pods-c744909b\x2d22b6\x2d420b\x2d8e32\x2d54ee72b34026-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 20:53:41.937057 kubelet[1852]: I0213 20:53:41.936991 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c744909b-22b6-420b-8e32-54ee72b34026-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c744909b-22b6-420b-8e32-54ee72b34026" (UID: "c744909b-22b6-420b-8e32-54ee72b34026"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 20:53:41.937239 kubelet[1852]: I0213 20:53:41.937189 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c744909b-22b6-420b-8e32-54ee72b34026-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c744909b-22b6-420b-8e32-54ee72b34026" (UID: "c744909b-22b6-420b-8e32-54ee72b34026"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:53:41.939579 systemd[1]: var-lib-kubelet-pods-c744909b\x2d22b6\x2d420b\x2d8e32\x2d54ee72b34026-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 20:53:41.943417 kubelet[1852]: I0213 20:53:41.943246 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c744909b-22b6-420b-8e32-54ee72b34026-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c744909b-22b6-420b-8e32-54ee72b34026" (UID: "c744909b-22b6-420b-8e32-54ee72b34026"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:53:41.946790 kubelet[1852]: I0213 20:53:41.945966 1852 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c744909b-22b6-420b-8e32-54ee72b34026-kube-api-access-k9lns" (OuterVolumeSpecName: "kube-api-access-k9lns") pod "c744909b-22b6-420b-8e32-54ee72b34026" (UID: "c744909b-22b6-420b-8e32-54ee72b34026"). InnerVolumeSpecName "kube-api-access-k9lns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:53:41.947530 systemd[1]: var-lib-kubelet-pods-c744909b\x2d22b6\x2d420b\x2d8e32\x2d54ee72b34026-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk9lns.mount: Deactivated successfully. 
Feb 13 20:53:42.021055 kubelet[1852]: I0213 20:53:42.020866 1852 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c744909b-22b6-420b-8e32-54ee72b34026-clustermesh-secrets\") on node \"172.24.4.171\" DevicePath \"\"" Feb 13 20:53:42.021055 kubelet[1852]: I0213 20:53:42.020958 1852 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-etc-cni-netd\") on node \"172.24.4.171\" DevicePath \"\"" Feb 13 20:53:42.021055 kubelet[1852]: I0213 20:53:42.020982 1852 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-bpf-maps\") on node \"172.24.4.171\" DevicePath \"\"" Feb 13 20:53:42.021055 kubelet[1852]: I0213 20:53:42.021005 1852 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-host-proc-sys-kernel\") on node \"172.24.4.171\" DevicePath \"\"" Feb 13 20:53:42.021055 kubelet[1852]: I0213 20:53:42.021031 1852 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-cilium-cgroup\") on node \"172.24.4.171\" DevicePath \"\"" Feb 13 20:53:42.021055 kubelet[1852]: I0213 20:53:42.021055 1852 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-lib-modules\") on node \"172.24.4.171\" DevicePath \"\"" Feb 13 20:53:42.021055 kubelet[1852]: I0213 20:53:42.021077 1852 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-k9lns\" (UniqueName: \"kubernetes.io/projected/c744909b-22b6-420b-8e32-54ee72b34026-kube-api-access-k9lns\") on node \"172.24.4.171\" DevicePath \"\"" Feb 13 20:53:42.021780 kubelet[1852]: I0213 20:53:42.021098 1852 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-cni-path\") on node \"172.24.4.171\" DevicePath \"\"" Feb 13 20:53:42.021780 kubelet[1852]: I0213 20:53:42.021120 1852 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c744909b-22b6-420b-8e32-54ee72b34026-hubble-tls\") on node \"172.24.4.171\" DevicePath \"\"" Feb 13 20:53:42.021780 kubelet[1852]: I0213 20:53:42.021141 1852 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-host-proc-sys-net\") on node \"172.24.4.171\" DevicePath \"\"" Feb 13 20:53:42.021780 kubelet[1852]: I0213 20:53:42.021162 1852 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-cilium-run\") on node \"172.24.4.171\" DevicePath \"\"" Feb 13 20:53:42.021780 kubelet[1852]: I0213 20:53:42.021182 1852 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-xtables-lock\") on node \"172.24.4.171\" DevicePath \"\"" Feb 13 20:53:42.021780 kubelet[1852]: I0213 20:53:42.021203 1852 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c744909b-22b6-420b-8e32-54ee72b34026-hostproc\") on node \"172.24.4.171\" DevicePath \"\"" Feb 13 
20:53:42.021780 kubelet[1852]: I0213 20:53:42.021223 1852 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c744909b-22b6-420b-8e32-54ee72b34026-cilium-config-path\") on node \"172.24.4.171\" DevicePath \"\"" Feb 13 20:53:42.091765 kubelet[1852]: E0213 20:53:42.091663 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:42.188973 systemd[1]: Removed slice kubepods-burstable-podc744909b_22b6_420b_8e32_54ee72b34026.slice - libcontainer container kubepods-burstable-podc744909b_22b6_420b_8e32_54ee72b34026.slice. Feb 13 20:53:42.189555 systemd[1]: kubepods-burstable-podc744909b_22b6_420b_8e32_54ee72b34026.slice: Consumed 9.340s CPU time. Feb 13 20:53:42.655479 kubelet[1852]: I0213 20:53:42.655313 1852 scope.go:117] "RemoveContainer" containerID="82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620" Feb 13 20:53:42.662766 containerd[1459]: time="2025-02-13T20:53:42.661365558Z" level=info msg="RemoveContainer for \"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620\"" Feb 13 20:53:42.668151 containerd[1459]: time="2025-02-13T20:53:42.668088315Z" level=info msg="RemoveContainer for \"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620\" returns successfully" Feb 13 20:53:42.669024 kubelet[1852]: I0213 20:53:42.668978 1852 scope.go:117] "RemoveContainer" containerID="69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2" Feb 13 20:53:42.673577 containerd[1459]: time="2025-02-13T20:53:42.672907616Z" level=info msg="RemoveContainer for \"69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2\"" Feb 13 20:53:42.678670 containerd[1459]: time="2025-02-13T20:53:42.678527155Z" level=info msg="RemoveContainer for \"69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2\" returns successfully" Feb 13 20:53:42.679319 kubelet[1852]: I0213 20:53:42.679200 1852 scope.go:117] "RemoveContainer" containerID="b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034" Feb 13 20:53:42.681760 containerd[1459]: time="2025-02-13T20:53:42.681705371Z" level=info msg="RemoveContainer for \"b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034\"" Feb 13 20:53:42.687257 containerd[1459]: time="2025-02-13T20:53:42.687146647Z" level=info msg="RemoveContainer for \"b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034\" returns successfully" Feb 13 20:53:42.687743 kubelet[1852]: I0213 20:53:42.687435 1852 scope.go:117] "RemoveContainer" containerID="50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305" Feb 13 20:53:42.691516 containerd[1459]: time="2025-02-13T20:53:42.691440022Z" level=info msg="RemoveContainer for \"50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305\"" Feb 13 20:53:42.699416 containerd[1459]: time="2025-02-13T20:53:42.699239948Z" level=info msg="RemoveContainer for \"50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305\" returns successfully" Feb 13 20:53:42.700166 kubelet[1852]: I0213 20:53:42.699955 1852 scope.go:117] "RemoveContainer" containerID="794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82" Feb 13 20:53:42.703886 containerd[1459]: time="2025-02-13T20:53:42.703091855Z" level=info msg="RemoveContainer for \"794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82\"" Feb 13 20:53:42.709022 containerd[1459]: time="2025-02-13T20:53:42.708861978Z" level=info msg="RemoveContainer for 
\"794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82\" returns successfully" Feb 13 20:53:42.709566 kubelet[1852]: I0213 20:53:42.709457 1852 scope.go:117] "RemoveContainer" containerID="82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620" Feb 13 20:53:42.710057 containerd[1459]: time="2025-02-13T20:53:42.709982007Z" level=error msg="ContainerStatus for \"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620\": not found" Feb 13 20:53:42.710501 kubelet[1852]: E0213 20:53:42.710444 1852 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620\": not found" containerID="82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620" Feb 13 20:53:42.710769 kubelet[1852]: I0213 20:53:42.710512 1852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620"} err="failed to get container status \"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620\": rpc error: code = NotFound desc = an error occurred when try to find container \"82143133dafaaee846cacf6593e9d008eb85bdb987af9d0d727331a9143a9620\": not found" Feb 13 20:53:42.710899 kubelet[1852]: I0213 20:53:42.710766 1852 scope.go:117] "RemoveContainer" containerID="69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2" Feb 13 20:53:42.711728 containerd[1459]: time="2025-02-13T20:53:42.711141339Z" level=error msg="ContainerStatus for \"69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2\": not found" Feb 13 20:53:42.712508 kubelet[1852]: E0213 20:53:42.712009 1852 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2\": not found" containerID="69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2" Feb 13 20:53:42.712508 kubelet[1852]: I0213 20:53:42.712060 1852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2"} err="failed to get container status \"69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2\": rpc error: code = NotFound desc = an error occurred when try to find container \"69066cd60a4cd238cb54ad6e53c4d42f8cae664f3f0e698ac964e6d736363ed2\": not found" Feb 13 20:53:42.712508 kubelet[1852]: I0213 20:53:42.712096 1852 scope.go:117] "RemoveContainer" containerID="b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034" Feb 13 20:53:42.712825 containerd[1459]: time="2025-02-13T20:53:42.712391151Z" level=error msg="ContainerStatus for \"b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034\": not found" Feb 13 20:53:42.713565 kubelet[1852]: E0213 20:53:42.713039 1852 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034\": not found" containerID="b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034" Feb 13 20:53:42.713565 kubelet[1852]: I0213 20:53:42.713093 1852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034"} err="failed to get container status \"b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034\": rpc error: code = NotFound desc = an error occurred when try to find container \"b025fdcf3e90171936a9114559c6025c2fe6a535ebca680778dfd35592be1034\": not found" Feb 13 20:53:42.713565 kubelet[1852]: I0213 20:53:42.713129 1852 scope.go:117] "RemoveContainer" containerID="50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305" Feb 13 20:53:42.713855 containerd[1459]: time="2025-02-13T20:53:42.713432242Z" level=error msg="ContainerStatus for \"50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305\": not found" Feb 13 20:53:42.714457 kubelet[1852]: E0213 20:53:42.714336 1852 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305\": not found" containerID="50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305" Feb 13 20:53:42.714569 kubelet[1852]: I0213 20:53:42.714450 1852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305"} err="failed to get container status \"50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305\": rpc error: code = NotFound desc = an error occurred when try to find container \"50c0b18a6347f88757e110204c508ecc3bb853f583a6f3d6f8acbf65f6900305\": not found" Feb 13 20:53:42.714569 kubelet[1852]: I0213 20:53:42.714525 1852 scope.go:117] "RemoveContainer" containerID="794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82" Feb 13 20:53:42.715204 containerd[1459]: time="2025-02-13T20:53:42.714946309Z" level=error msg="ContainerStatus for \"794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82\": not found" Feb 13 20:53:42.715509 kubelet[1852]: E0213 20:53:42.715396 1852 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82\": not found" containerID="794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82" Feb 13 20:53:42.715509 kubelet[1852]: I0213 20:53:42.715446 1852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82"} err="failed to get container status \"794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82\": rpc error: code = NotFound desc = an error occurred when try to find container \"794487a1ee1a84d0cf106c38aef396eacca0b25231c80c112d9c146cb1028b82\": not found" Feb 13 
20:53:43.092244 kubelet[1852]: E0213 20:53:43.092164 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:44.026307 kubelet[1852]: E0213 20:53:44.026221 1852 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:44.093646 kubelet[1852]: E0213 20:53:44.093404 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:44.180984 kubelet[1852]: I0213 20:53:44.180923 1852 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c744909b-22b6-420b-8e32-54ee72b34026" path="/var/lib/kubelet/pods/c744909b-22b6-420b-8e32-54ee72b34026/volumes" Feb 13 20:53:44.200576 kubelet[1852]: E0213 20:53:44.200483 1852 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:45.093873 kubelet[1852]: E0213 20:53:45.093739 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:45.321663 kubelet[1852]: E0213 20:53:45.320437 1852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c744909b-22b6-420b-8e32-54ee72b34026" containerName="cilium-agent" Feb 13 20:53:45.321663 kubelet[1852]: E0213 20:53:45.320496 1852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c744909b-22b6-420b-8e32-54ee72b34026" containerName="mount-cgroup" Feb 13 20:53:45.321663 kubelet[1852]: E0213 20:53:45.320513 1852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c744909b-22b6-420b-8e32-54ee72b34026" containerName="apply-sysctl-overwrites" Feb 13 20:53:45.321663 kubelet[1852]: E0213 20:53:45.320534 1852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c744909b-22b6-420b-8e32-54ee72b34026" containerName="mount-bpf-fs" Feb 13 20:53:45.321663 kubelet[1852]: E0213 20:53:45.320548 1852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c744909b-22b6-420b-8e32-54ee72b34026" containerName="clean-cilium-state" Feb 13 20:53:45.321663 kubelet[1852]: I0213 20:53:45.320591 1852 memory_manager.go:354] "RemoveStaleState removing state" podUID="c744909b-22b6-420b-8e32-54ee72b34026" containerName="cilium-agent" Feb 13 20:53:45.338847 systemd[1]: Created slice kubepods-burstable-pod3eb7710b_4c8e_4d5b_ba6b_31a4591e9143.slice - libcontainer container kubepods-burstable-pod3eb7710b_4c8e_4d5b_ba6b_31a4591e9143.slice. Feb 13 20:53:45.358510 systemd[1]: Created slice kubepods-besteffort-pod64fb5fba_ae85_44a0_a532_9600fc4b5b61.slice - libcontainer container kubepods-besteffort-pod64fb5fba_ae85_44a0_a532_9600fc4b5b61.slice. 
Feb 13 20:53:45.445510 kubelet[1852]: I0213 20:53:45.445414 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-cilium-ipsec-secrets\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.445510 kubelet[1852]: I0213 20:53:45.445505 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-lib-modules\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.445863 kubelet[1852]: I0213 20:53:45.445551 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-hubble-tls\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.445863 kubelet[1852]: I0213 20:53:45.445597 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-cilium-config-path\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.445863 kubelet[1852]: I0213 20:53:45.445683 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-hostproc\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.445863 kubelet[1852]: I0213 20:53:45.445722 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-cni-path\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.445863 kubelet[1852]: I0213 20:53:45.445760 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-etc-cni-netd\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.445863 kubelet[1852]: I0213 20:53:45.445801 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-clustermesh-secrets\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.446326 kubelet[1852]: I0213 20:53:45.445842 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-bpf-maps\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.446326 kubelet[1852]: I0213 20:53:45.445890 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtwjl\" (UniqueName: 
\"kubernetes.io/projected/64fb5fba-ae85-44a0-a532-9600fc4b5b61-kube-api-access-jtwjl\") pod \"cilium-operator-5d85765b45-8kjnn\" (UID: \"64fb5fba-ae85-44a0-a532-9600fc4b5b61\") " pod="kube-system/cilium-operator-5d85765b45-8kjnn" Feb 13 20:53:45.446326 kubelet[1852]: I0213 20:53:45.445934 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-cilium-run\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.446326 kubelet[1852]: I0213 20:53:45.445973 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-cilium-cgroup\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.446326 kubelet[1852]: I0213 20:53:45.446015 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-xtables-lock\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.446677 kubelet[1852]: I0213 20:53:45.446054 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-host-proc-sys-net\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.446677 kubelet[1852]: I0213 20:53:45.446097 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64fb5fba-ae85-44a0-a532-9600fc4b5b61-cilium-config-path\") pod \"cilium-operator-5d85765b45-8kjnn\" (UID: \"64fb5fba-ae85-44a0-a532-9600fc4b5b61\") " pod="kube-system/cilium-operator-5d85765b45-8kjnn" Feb 13 20:53:45.446677 kubelet[1852]: I0213 20:53:45.446139 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlf5k\" (UniqueName: \"kubernetes.io/projected/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-kube-api-access-jlf5k\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.446677 kubelet[1852]: I0213 20:53:45.446178 1852 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3eb7710b-4c8e-4d5b-ba6b-31a4591e9143-host-proc-sys-kernel\") pod \"cilium-5tnv9\" (UID: \"3eb7710b-4c8e-4d5b-ba6b-31a4591e9143\") " pod="kube-system/cilium-5tnv9" Feb 13 20:53:45.656574 containerd[1459]: time="2025-02-13T20:53:45.655185963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5tnv9,Uid:3eb7710b-4c8e-4d5b-ba6b-31a4591e9143,Namespace:kube-system,Attempt:0,}" Feb 13 20:53:45.667267 containerd[1459]: time="2025-02-13T20:53:45.667078634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8kjnn,Uid:64fb5fba-ae85-44a0-a532-9600fc4b5b61,Namespace:kube-system,Attempt:0,}" Feb 13 20:53:45.668437 kubelet[1852]: I0213 20:53:45.668312 1852 setters.go:600] "Node became not ready" node="172.24.4.171" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T20:53:45Z","lastTransitionTime":"2025-02-13T20:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 20:53:45.731895 containerd[1459]: time="2025-02-13T20:53:45.731741438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:53:45.731895 containerd[1459]: time="2025-02-13T20:53:45.731808704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:53:45.731895 containerd[1459]: time="2025-02-13T20:53:45.731824704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:53:45.732536 containerd[1459]: time="2025-02-13T20:53:45.731911747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:53:45.742469 containerd[1459]: time="2025-02-13T20:53:45.742170254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:53:45.742469 containerd[1459]: time="2025-02-13T20:53:45.742325826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:53:45.742469 containerd[1459]: time="2025-02-13T20:53:45.742364689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:53:45.742949 containerd[1459]: time="2025-02-13T20:53:45.742892449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:53:45.762797 systemd[1]: Started cri-containerd-682853046e7477d479d115a045226e4d457565ac4fb6b53a0d679574b3201545.scope - libcontainer container 682853046e7477d479d115a045226e4d457565ac4fb6b53a0d679574b3201545. Feb 13 20:53:45.769320 systemd[1]: Started cri-containerd-4be3ca24c28595b370b52b2cb8e914dbdabc79e261051c9bb0a2f0878076bd74.scope - libcontainer container 4be3ca24c28595b370b52b2cb8e914dbdabc79e261051c9bb0a2f0878076bd74. 
Feb 13 20:53:45.799043 containerd[1459]: time="2025-02-13T20:53:45.798987307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5tnv9,Uid:3eb7710b-4c8e-4d5b-ba6b-31a4591e9143,Namespace:kube-system,Attempt:0,} returns sandbox id \"682853046e7477d479d115a045226e4d457565ac4fb6b53a0d679574b3201545\"" Feb 13 20:53:45.803047 containerd[1459]: time="2025-02-13T20:53:45.803006322Z" level=info msg="CreateContainer within sandbox \"682853046e7477d479d115a045226e4d457565ac4fb6b53a0d679574b3201545\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 20:53:45.821268 containerd[1459]: time="2025-02-13T20:53:45.821220367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8kjnn,Uid:64fb5fba-ae85-44a0-a532-9600fc4b5b61,Namespace:kube-system,Attempt:0,} returns sandbox id \"4be3ca24c28595b370b52b2cb8e914dbdabc79e261051c9bb0a2f0878076bd74\"" Feb 13 20:53:45.823524 containerd[1459]: time="2025-02-13T20:53:45.823494652Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 20:53:45.825525 containerd[1459]: time="2025-02-13T20:53:45.825487789Z" level=info msg="CreateContainer within sandbox \"682853046e7477d479d115a045226e4d457565ac4fb6b53a0d679574b3201545\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a3c63c67ebbad25de72f73708f7cee6f3852a079aebd5dfb73104e71b2db0fd3\"" Feb 13 20:53:45.826737 containerd[1459]: time="2025-02-13T20:53:45.826287329Z" level=info msg="StartContainer for \"a3c63c67ebbad25de72f73708f7cee6f3852a079aebd5dfb73104e71b2db0fd3\"" Feb 13 20:53:45.856808 systemd[1]: Started cri-containerd-a3c63c67ebbad25de72f73708f7cee6f3852a079aebd5dfb73104e71b2db0fd3.scope - libcontainer container a3c63c67ebbad25de72f73708f7cee6f3852a079aebd5dfb73104e71b2db0fd3. Feb 13 20:53:45.895105 containerd[1459]: time="2025-02-13T20:53:45.895048268Z" level=info msg="StartContainer for \"a3c63c67ebbad25de72f73708f7cee6f3852a079aebd5dfb73104e71b2db0fd3\" returns successfully" Feb 13 20:53:45.898367 systemd[1]: cri-containerd-a3c63c67ebbad25de72f73708f7cee6f3852a079aebd5dfb73104e71b2db0fd3.scope: Deactivated successfully. 
Feb 13 20:53:45.940849 containerd[1459]: time="2025-02-13T20:53:45.940762540Z" level=info msg="shim disconnected" id=a3c63c67ebbad25de72f73708f7cee6f3852a079aebd5dfb73104e71b2db0fd3 namespace=k8s.io Feb 13 20:53:45.940849 containerd[1459]: time="2025-02-13T20:53:45.940844053Z" level=warning msg="cleaning up after shim disconnected" id=a3c63c67ebbad25de72f73708f7cee6f3852a079aebd5dfb73104e71b2db0fd3 namespace=k8s.io Feb 13 20:53:45.940849 containerd[1459]: time="2025-02-13T20:53:45.940855174Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:53:46.094287 kubelet[1852]: E0213 20:53:46.094182 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:46.683658 containerd[1459]: time="2025-02-13T20:53:46.683521601Z" level=info msg="CreateContainer within sandbox \"682853046e7477d479d115a045226e4d457565ac4fb6b53a0d679574b3201545\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 20:53:46.715910 containerd[1459]: time="2025-02-13T20:53:46.715696548Z" level=info msg="CreateContainer within sandbox \"682853046e7477d479d115a045226e4d457565ac4fb6b53a0d679574b3201545\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"62193be14a445076f68f597a856d27de476bf37b3412760e0ed0d00538cc3620\"" Feb 13 20:53:46.717129 containerd[1459]: time="2025-02-13T20:53:46.717039837Z" level=info msg="StartContainer for \"62193be14a445076f68f597a856d27de476bf37b3412760e0ed0d00538cc3620\"" Feb 13 20:53:46.784742 systemd[1]: Started cri-containerd-62193be14a445076f68f597a856d27de476bf37b3412760e0ed0d00538cc3620.scope - libcontainer container 62193be14a445076f68f597a856d27de476bf37b3412760e0ed0d00538cc3620. Feb 13 20:53:46.816986 containerd[1459]: time="2025-02-13T20:53:46.816942364Z" level=info msg="StartContainer for \"62193be14a445076f68f597a856d27de476bf37b3412760e0ed0d00538cc3620\" returns successfully" Feb 13 20:53:46.820082 systemd[1]: cri-containerd-62193be14a445076f68f597a856d27de476bf37b3412760e0ed0d00538cc3620.scope: Deactivated successfully. Feb 13 20:53:46.849916 containerd[1459]: time="2025-02-13T20:53:46.849711135Z" level=info msg="shim disconnected" id=62193be14a445076f68f597a856d27de476bf37b3412760e0ed0d00538cc3620 namespace=k8s.io Feb 13 20:53:46.849916 containerd[1459]: time="2025-02-13T20:53:46.849768983Z" level=warning msg="cleaning up after shim disconnected" id=62193be14a445076f68f597a856d27de476bf37b3412760e0ed0d00538cc3620 namespace=k8s.io Feb 13 20:53:46.849916 containerd[1459]: time="2025-02-13T20:53:46.849778421Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:53:47.094974 kubelet[1852]: E0213 20:53:47.094688 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:47.558181 systemd[1]: run-containerd-runc-k8s.io-62193be14a445076f68f597a856d27de476bf37b3412760e0ed0d00538cc3620-runc.EqFBlV.mount: Deactivated successfully. Feb 13 20:53:47.558368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62193be14a445076f68f597a856d27de476bf37b3412760e0ed0d00538cc3620-rootfs.mount: Deactivated successfully. 
Feb 13 20:53:47.684334 containerd[1459]: time="2025-02-13T20:53:47.684167134Z" level=info msg="CreateContainer within sandbox \"682853046e7477d479d115a045226e4d457565ac4fb6b53a0d679574b3201545\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 20:53:47.719572 containerd[1459]: time="2025-02-13T20:53:47.719527190Z" level=info msg="CreateContainer within sandbox \"682853046e7477d479d115a045226e4d457565ac4fb6b53a0d679574b3201545\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0c764c2742b62eed6a6f7f84facdbde851e1b65c2f0698f0d2074ca4f7362a23\"" Feb 13 20:53:47.722567 containerd[1459]: time="2025-02-13T20:53:47.720831076Z" level=info msg="StartContainer for \"0c764c2742b62eed6a6f7f84facdbde851e1b65c2f0698f0d2074ca4f7362a23\"" Feb 13 20:53:47.754802 systemd[1]: run-containerd-runc-k8s.io-0c764c2742b62eed6a6f7f84facdbde851e1b65c2f0698f0d2074ca4f7362a23-runc.0WvFjP.mount: Deactivated successfully. Feb 13 20:53:47.766289 systemd[1]: Started cri-containerd-0c764c2742b62eed6a6f7f84facdbde851e1b65c2f0698f0d2074ca4f7362a23.scope - libcontainer container 0c764c2742b62eed6a6f7f84facdbde851e1b65c2f0698f0d2074ca4f7362a23. Feb 13 20:53:47.814270 systemd[1]: cri-containerd-0c764c2742b62eed6a6f7f84facdbde851e1b65c2f0698f0d2074ca4f7362a23.scope: Deactivated successfully. Feb 13 20:53:47.816016 containerd[1459]: time="2025-02-13T20:53:47.815166550Z" level=info msg="StartContainer for \"0c764c2742b62eed6a6f7f84facdbde851e1b65c2f0698f0d2074ca4f7362a23\" returns successfully" Feb 13 20:53:48.005132 containerd[1459]: time="2025-02-13T20:53:48.005009919Z" level=info msg="shim disconnected" id=0c764c2742b62eed6a6f7f84facdbde851e1b65c2f0698f0d2074ca4f7362a23 namespace=k8s.io Feb 13 20:53:48.005132 containerd[1459]: time="2025-02-13T20:53:48.005117681Z" level=warning msg="cleaning up after shim disconnected" id=0c764c2742b62eed6a6f7f84facdbde851e1b65c2f0698f0d2074ca4f7362a23 namespace=k8s.io Feb 13 20:53:48.005132 containerd[1459]: time="2025-02-13T20:53:48.005140193Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:53:48.095269 kubelet[1852]: E0213 20:53:48.095075 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:48.329216 containerd[1459]: time="2025-02-13T20:53:48.328336426Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:53:48.330619 containerd[1459]: time="2025-02-13T20:53:48.330558636Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 20:53:48.332257 containerd[1459]: time="2025-02-13T20:53:48.332229210Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:53:48.334004 containerd[1459]: time="2025-02-13T20:53:48.333559076Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.510028566s" Feb 13 
20:53:48.334099 containerd[1459]: time="2025-02-13T20:53:48.334080784Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 20:53:48.336850 containerd[1459]: time="2025-02-13T20:53:48.336710508Z" level=info msg="CreateContainer within sandbox \"4be3ca24c28595b370b52b2cb8e914dbdabc79e261051c9bb0a2f0878076bd74\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 20:53:48.359878 containerd[1459]: time="2025-02-13T20:53:48.359755270Z" level=info msg="CreateContainer within sandbox \"4be3ca24c28595b370b52b2cb8e914dbdabc79e261051c9bb0a2f0878076bd74\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"165719ffa8dfba38aebdbcf629129942883c5c9b5be233bdd9e6e1e30657c3ca\"" Feb 13 20:53:48.361116 containerd[1459]: time="2025-02-13T20:53:48.361065097Z" level=info msg="StartContainer for \"165719ffa8dfba38aebdbcf629129942883c5c9b5be233bdd9e6e1e30657c3ca\"" Feb 13 20:53:48.405768 systemd[1]: Started cri-containerd-165719ffa8dfba38aebdbcf629129942883c5c9b5be233bdd9e6e1e30657c3ca.scope - libcontainer container 165719ffa8dfba38aebdbcf629129942883c5c9b5be233bdd9e6e1e30657c3ca. Feb 13 20:53:48.444155 containerd[1459]: time="2025-02-13T20:53:48.444112027Z" level=info msg="StartContainer for \"165719ffa8dfba38aebdbcf629129942883c5c9b5be233bdd9e6e1e30657c3ca\" returns successfully" Feb 13 20:53:48.559795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c764c2742b62eed6a6f7f84facdbde851e1b65c2f0698f0d2074ca4f7362a23-rootfs.mount: Deactivated successfully. Feb 13 20:53:48.701131 containerd[1459]: time="2025-02-13T20:53:48.700979130Z" level=info msg="CreateContainer within sandbox \"682853046e7477d479d115a045226e4d457565ac4fb6b53a0d679574b3201545\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 20:53:48.735676 kubelet[1852]: I0213 20:53:48.733817 1852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-8kjnn" podStartSLOduration=1.221429145 podStartE2EDuration="3.733735544s" podCreationTimestamp="2025-02-13 20:53:45 +0000 UTC" firstStartedPulling="2025-02-13 20:53:45.822947306 +0000 UTC m=+82.674796440" lastFinishedPulling="2025-02-13 20:53:48.335253705 +0000 UTC m=+85.187102839" observedRunningTime="2025-02-13 20:53:48.706824358 +0000 UTC m=+85.558673592" watchObservedRunningTime="2025-02-13 20:53:48.733735544 +0000 UTC m=+85.585584728" Feb 13 20:53:48.743278 containerd[1459]: time="2025-02-13T20:53:48.743179916Z" level=info msg="CreateContainer within sandbox \"682853046e7477d479d115a045226e4d457565ac4fb6b53a0d679574b3201545\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"62e8623ed4cd9dcf3d9b1460577823908ad64e69c90947ed362cbefe14410387\"" Feb 13 20:53:48.745263 containerd[1459]: time="2025-02-13T20:53:48.744375279Z" level=info msg="StartContainer for \"62e8623ed4cd9dcf3d9b1460577823908ad64e69c90947ed362cbefe14410387\"" Feb 13 20:53:48.794763 systemd[1]: Started cri-containerd-62e8623ed4cd9dcf3d9b1460577823908ad64e69c90947ed362cbefe14410387.scope - libcontainer container 62e8623ed4cd9dcf3d9b1460577823908ad64e69c90947ed362cbefe14410387. Feb 13 20:53:48.821524 systemd[1]: cri-containerd-62e8623ed4cd9dcf3d9b1460577823908ad64e69c90947ed362cbefe14410387.scope: Deactivated successfully. 
Feb 13 20:53:48.826362 containerd[1459]: time="2025-02-13T20:53:48.826153236Z" level=info msg="StartContainer for \"62e8623ed4cd9dcf3d9b1460577823908ad64e69c90947ed362cbefe14410387\" returns successfully" Feb 13 20:53:49.153150 kubelet[1852]: E0213 20:53:49.095596 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:53:49.173551 containerd[1459]: time="2025-02-13T20:53:49.173249718Z" level=info msg="shim disconnected" id=62e8623ed4cd9dcf3d9b1460577823908ad64e69c90947ed362cbefe14410387 namespace=k8s.io Feb 13 20:53:49.174501 containerd[1459]: time="2025-02-13T20:53:49.174007460Z" level=warning msg="cleaning up after shim disconnected" id=62e8623ed4cd9dcf3d9b1460577823908ad64e69c90947ed362cbefe14410387 namespace=k8s.io Feb 13 20:53:49.174501 containerd[1459]: time="2025-02-13T20:53:49.174051382Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:53:49.202744 kubelet[1852]: E0213 20:53:49.202380 1852 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:49.559340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62e8623ed4cd9dcf3d9b1460577823908ad64e69c90947ed362cbefe14410387-rootfs.mount: Deactivated successfully. Feb 13 20:53:49.709679 containerd[1459]: time="2025-02-13T20:53:49.709532359Z" level=info msg="CreateContainer within sandbox \"682853046e7477d479d115a045226e4d457565ac4fb6b53a0d679574b3201545\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 20:53:49.754049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount554789200.mount: Deactivated successfully. Feb 13 20:53:49.757005 containerd[1459]: time="2025-02-13T20:53:49.756751930Z" level=info msg="CreateContainer within sandbox \"682853046e7477d479d115a045226e4d457565ac4fb6b53a0d679574b3201545\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6e51653240472112d00dc31562eda29579aa901d15b14851b1551866d89a2947\"" Feb 13 20:53:49.759984 containerd[1459]: time="2025-02-13T20:53:49.759111538Z" level=info msg="StartContainer for \"6e51653240472112d00dc31562eda29579aa901d15b14851b1551866d89a2947\"" Feb 13 20:53:49.826858 systemd[1]: Started cri-containerd-6e51653240472112d00dc31562eda29579aa901d15b14851b1551866d89a2947.scope - libcontainer container 6e51653240472112d00dc31562eda29579aa901d15b14851b1551866d89a2947. 
Feb 13 20:53:49.862313 containerd[1459]: time="2025-02-13T20:53:49.862259503Z" level=info msg="StartContainer for \"6e51653240472112d00dc31562eda29579aa901d15b14851b1551866d89a2947\" returns successfully"
Feb 13 20:53:50.098620 kubelet[1852]: E0213 20:53:50.096553 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:53:50.203712 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:53:50.249671 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Feb 13 20:53:50.755655 kubelet[1852]: I0213 20:53:50.755515 1852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5tnv9" podStartSLOduration=5.754835724 podStartE2EDuration="5.754835724s" podCreationTimestamp="2025-02-13 20:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:53:50.753883347 +0000 UTC m=+87.605732541" watchObservedRunningTime="2025-02-13 20:53:50.754835724 +0000 UTC m=+87.606684908"
Feb 13 20:53:51.097668 kubelet[1852]: E0213 20:53:51.097413 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:53:52.098444 kubelet[1852]: E0213 20:53:52.098358 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:53:53.099313 kubelet[1852]: E0213 20:53:53.099244 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:53:53.427502 systemd-networkd[1363]: lxc_health: Link UP
Feb 13 20:53:53.437646 systemd-networkd[1363]: lxc_health: Gained carrier
Feb 13 20:53:54.031824 systemd[1]: run-containerd-runc-k8s.io-6e51653240472112d00dc31562eda29579aa901d15b14851b1551866d89a2947-runc.izBkGr.mount: Deactivated successfully.
Feb 13 20:53:54.099749 kubelet[1852]: E0213 20:53:54.099688 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:53:55.100159 kubelet[1852]: E0213 20:53:55.100042 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:53:55.489770 systemd-networkd[1363]: lxc_health: Gained IPv6LL
Feb 13 20:53:56.100551 kubelet[1852]: E0213 20:53:56.100468 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:53:56.251146 systemd[1]: run-containerd-runc-k8s.io-6e51653240472112d00dc31562eda29579aa901d15b14851b1551866d89a2947-runc.8rFlR4.mount: Deactivated successfully.
Feb 13 20:53:57.101171 kubelet[1852]: E0213 20:53:57.101098 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:53:58.101285 kubelet[1852]: E0213 20:53:58.101243 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:53:59.101513 kubelet[1852]: E0213 20:53:59.101443 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:54:00.102877 kubelet[1852]: E0213 20:54:00.102691 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:54:01.103055 kubelet[1852]: E0213 20:54:01.102961 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:54:02.103769 kubelet[1852]: E0213 20:54:02.103668 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:54:03.104338 kubelet[1852]: E0213 20:54:03.104263 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:54:04.026453 kubelet[1852]: E0213 20:54:04.026347 1852 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:54:04.105245 kubelet[1852]: E0213 20:54:04.105113 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:54:05.105959 kubelet[1852]: E0213 20:54:05.105881 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:54:06.106190 kubelet[1852]: E0213 20:54:06.106097 1852 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"