Jan 30 15:44:23.091699 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 15:44:23.091731 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:44:23.091743 kernel: BIOS-provided physical RAM map: Jan 30 15:44:23.091752 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 15:44:23.091761 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 15:44:23.091773 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 15:44:23.091784 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 30 15:44:23.091792 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 30 15:44:23.091801 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 15:44:23.091810 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 15:44:23.091819 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 30 15:44:23.091827 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 30 15:44:23.091836 kernel: NX (Execute Disable) protection: active Jan 30 15:44:23.091844 kernel: APIC: Static calls initialized Jan 30 15:44:23.091858 kernel: SMBIOS 3.0.0 present. Jan 30 15:44:23.091868 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 30 15:44:23.091877 kernel: Hypervisor detected: KVM Jan 30 15:44:23.091886 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 15:44:23.091895 kernel: kvm-clock: using sched offset of 3504318704 cycles Jan 30 15:44:23.091907 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 15:44:23.091916 kernel: tsc: Detected 1996.249 MHz processor Jan 30 15:44:23.091926 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 15:44:23.091952 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 15:44:23.091960 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 30 15:44:23.091969 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 15:44:23.091978 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 15:44:23.091986 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 30 15:44:23.091994 kernel: ACPI: Early table checksum verification disabled Jan 30 15:44:23.092005 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 30 15:44:23.092014 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:44:23.092022 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:44:23.092030 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:44:23.092039 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 30 15:44:23.092047 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:44:23.092055 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:44:23.092064 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 30 15:44:23.092074 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 30 15:44:23.092084 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 30 15:44:23.092093 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 30 15:44:23.092101 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 30 15:44:23.092113 kernel: No NUMA configuration found Jan 30 15:44:23.092122 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 30 15:44:23.092130 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jan 30 15:44:23.092142 kernel: Zone ranges: Jan 30 15:44:23.092151 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 15:44:23.092160 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 15:44:23.092168 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 30 15:44:23.092177 kernel: Movable zone start for each node Jan 30 15:44:23.092185 kernel: Early memory node ranges Jan 30 15:44:23.092194 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 15:44:23.092203 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 30 15:44:23.092213 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 30 15:44:23.092222 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 30 15:44:23.092230 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 15:44:23.092240 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 15:44:23.092249 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 30 15:44:23.092282 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 15:44:23.092292 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 15:44:23.092301 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 15:44:23.092310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 15:44:23.092321 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 15:44:23.092331 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 15:44:23.092340 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 15:44:23.092348 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 15:44:23.092357 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 15:44:23.092366 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 15:44:23.092374 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 15:44:23.092384 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 30 15:44:23.092393 kernel: Booting paravirtualized kernel on KVM Jan 30 15:44:23.092404 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 15:44:23.092413 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 15:44:23.092422 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 15:44:23.092431 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 15:44:23.092439 kernel: pcpu-alloc: [0] 0 1 Jan 30 15:44:23.092448 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 30 15:44:23.092458 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:44:23.092467 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 15:44:23.092479 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 15:44:23.092487 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 15:44:23.092496 kernel: Fallback order for Node 0: 0 Jan 30 15:44:23.092505 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 30 15:44:23.092514 kernel: Policy zone: Normal Jan 30 15:44:23.092522 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 15:44:23.092531 kernel: software IO TLB: area num 2. Jan 30 15:44:23.092540 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 227308K reserved, 0K cma-reserved) Jan 30 15:44:23.092549 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 15:44:23.092559 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 15:44:23.092568 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 15:44:23.092577 kernel: Dynamic Preempt: voluntary Jan 30 15:44:23.092586 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 15:44:23.092598 kernel: rcu: RCU event tracing is enabled. Jan 30 15:44:23.092608 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 15:44:23.092617 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 15:44:23.092626 kernel: Rude variant of Tasks RCU enabled. Jan 30 15:44:23.092635 kernel: Tracing variant of Tasks RCU enabled. Jan 30 15:44:23.092644 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 15:44:23.092655 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 15:44:23.092664 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 15:44:23.092673 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 15:44:23.092682 kernel: Console: colour VGA+ 80x25 Jan 30 15:44:23.092690 kernel: printk: console [tty0] enabled Jan 30 15:44:23.092699 kernel: printk: console [ttyS0] enabled Jan 30 15:44:23.092708 kernel: ACPI: Core revision 20230628 Jan 30 15:44:23.092716 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 15:44:23.092725 kernel: x2apic enabled Jan 30 15:44:23.092736 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 15:44:23.092745 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 15:44:23.092753 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 30 15:44:23.092762 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 30 15:44:23.092771 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 30 15:44:23.092780 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 30 15:44:23.092789 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 15:44:23.092797 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 15:44:23.092806 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 15:44:23.092818 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 15:44:23.092826 kernel: Speculative Store Bypass: Vulnerable Jan 30 15:44:23.092835 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 30 15:44:23.092844 kernel: Freeing SMP alternatives memory: 32K Jan 30 15:44:23.092859 kernel: pid_max: default: 32768 minimum: 301 Jan 30 15:44:23.092872 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 15:44:23.092881 kernel: landlock: Up and running. Jan 30 15:44:23.092890 kernel: SELinux: Initializing. Jan 30 15:44:23.092899 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 15:44:23.092909 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 15:44:23.092918 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 30 15:44:23.092930 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 15:44:23.092939 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 15:44:23.092948 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 15:44:23.092959 kernel: Performance Events: AMD PMU driver. Jan 30 15:44:23.092968 kernel: ... version: 0 Jan 30 15:44:23.092979 kernel: ... bit width: 48 Jan 30 15:44:23.092988 kernel: ... generic registers: 4 Jan 30 15:44:23.092997 kernel: ... value mask: 0000ffffffffffff Jan 30 15:44:23.093006 kernel: ... max period: 00007fffffffffff Jan 30 15:44:23.093015 kernel: ... fixed-purpose events: 0 Jan 30 15:44:23.093024 kernel: ... event mask: 000000000000000f Jan 30 15:44:23.093034 kernel: signal: max sigframe size: 1440 Jan 30 15:44:23.093043 kernel: rcu: Hierarchical SRCU implementation. Jan 30 15:44:23.093052 kernel: rcu: Max phase no-delay instances is 400. Jan 30 15:44:23.093063 kernel: smp: Bringing up secondary CPUs ... Jan 30 15:44:23.093072 kernel: smpboot: x86: Booting SMP configuration: Jan 30 15:44:23.093081 kernel: .... 
node #0, CPUs: #1 Jan 30 15:44:23.093090 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 15:44:23.093100 kernel: smpboot: Max logical packages: 2 Jan 30 15:44:23.093109 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 30 15:44:23.093118 kernel: devtmpfs: initialized Jan 30 15:44:23.093127 kernel: x86/mm: Memory block size: 128MB Jan 30 15:44:23.093136 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 15:44:23.093146 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 15:44:23.093158 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 15:44:23.093167 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 15:44:23.093175 kernel: audit: initializing netlink subsys (disabled) Jan 30 15:44:23.093184 kernel: audit: type=2000 audit(1738251861.832:1): state=initialized audit_enabled=0 res=1 Jan 30 15:44:23.093192 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 15:44:23.093201 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 15:44:23.093209 kernel: cpuidle: using governor menu Jan 30 15:44:23.093217 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 15:44:23.093226 kernel: dca service started, version 1.12.1 Jan 30 15:44:23.093238 kernel: PCI: Using configuration type 1 for base access Jan 30 15:44:23.093246 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 30 15:44:23.093268 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 15:44:23.093277 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 15:44:23.093286 kernel: ACPI: Added _OSI(Module Device) Jan 30 15:44:23.093294 kernel: ACPI: Added _OSI(Processor Device) Jan 30 15:44:23.093303 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 15:44:23.093311 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 15:44:23.093319 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 15:44:23.093330 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 15:44:23.093354 kernel: ACPI: Interpreter enabled Jan 30 15:44:23.093363 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 15:44:23.093371 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 15:44:23.093380 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 15:44:23.093388 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 15:44:23.093397 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 30 15:44:23.093405 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 15:44:23.093540 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 15:44:23.093645 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 15:44:23.093741 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 15:44:23.093755 kernel: acpiphp: Slot [3] registered Jan 30 15:44:23.093763 kernel: acpiphp: Slot [4] registered Jan 30 15:44:23.093772 kernel: acpiphp: Slot [5] registered Jan 30 15:44:23.093780 kernel: acpiphp: Slot [6] registered Jan 30 15:44:23.093789 kernel: acpiphp: Slot [7] registered Jan 30 15:44:23.093802 kernel: acpiphp: Slot [8] registered Jan 30 15:44:23.093810 kernel: acpiphp: Slot [9] registered Jan 30 15:44:23.093819 kernel: acpiphp: Slot [10] registered Jan 30 15:44:23.093827 
kernel: acpiphp: Slot [11] registered Jan 30 15:44:23.093836 kernel: acpiphp: Slot [12] registered Jan 30 15:44:23.093844 kernel: acpiphp: Slot [13] registered Jan 30 15:44:23.093853 kernel: acpiphp: Slot [14] registered Jan 30 15:44:23.093861 kernel: acpiphp: Slot [15] registered Jan 30 15:44:23.093870 kernel: acpiphp: Slot [16] registered Jan 30 15:44:23.093880 kernel: acpiphp: Slot [17] registered Jan 30 15:44:23.093888 kernel: acpiphp: Slot [18] registered Jan 30 15:44:23.093897 kernel: acpiphp: Slot [19] registered Jan 30 15:44:23.093905 kernel: acpiphp: Slot [20] registered Jan 30 15:44:23.093914 kernel: acpiphp: Slot [21] registered Jan 30 15:44:23.093922 kernel: acpiphp: Slot [22] registered Jan 30 15:44:23.093931 kernel: acpiphp: Slot [23] registered Jan 30 15:44:23.093939 kernel: acpiphp: Slot [24] registered Jan 30 15:44:23.093948 kernel: acpiphp: Slot [25] registered Jan 30 15:44:23.093956 kernel: acpiphp: Slot [26] registered Jan 30 15:44:23.093967 kernel: acpiphp: Slot [27] registered Jan 30 15:44:23.093975 kernel: acpiphp: Slot [28] registered Jan 30 15:44:23.093984 kernel: acpiphp: Slot [29] registered Jan 30 15:44:23.093992 kernel: acpiphp: Slot [30] registered Jan 30 15:44:23.094001 kernel: acpiphp: Slot [31] registered Jan 30 15:44:23.094011 kernel: PCI host bridge to bus 0000:00 Jan 30 15:44:23.094114 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 15:44:23.094204 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 15:44:23.094334 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 15:44:23.094426 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 15:44:23.094512 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 30 15:44:23.094599 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 15:44:23.094719 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 15:44:23.094832 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 30 15:44:23.094945 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 30 15:44:23.095058 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 30 15:44:23.095161 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 30 15:44:23.095282 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 30 15:44:23.095385 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 30 15:44:23.095483 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 30 15:44:23.095599 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 30 15:44:23.095706 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 30 15:44:23.095805 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 30 15:44:23.095915 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 30 15:44:23.096043 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 30 15:44:23.096155 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 30 15:44:23.096902 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 30 15:44:23.097021 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 30 15:44:23.097135 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 15:44:23.097296 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 30 15:44:23.097415 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 30 15:44:23.097525 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 30 15:44:23.097667 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 30 15:44:23.097779 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 30 15:44:23.097898 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 30 15:44:23.098024 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 15:44:23.098138 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 30 15:44:23.098246 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 30 15:44:23.101022 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 30 15:44:23.101134 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 30 15:44:23.101239 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 30 15:44:23.101391 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 15:44:23.101514 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 30 15:44:23.101632 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 30 15:44:23.101738 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 30 15:44:23.101754 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 15:44:23.101765 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 15:44:23.101776 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 15:44:23.101786 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 15:44:23.101797 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 15:44:23.101812 kernel: iommu: Default domain type: Translated Jan 30 15:44:23.101823 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 15:44:23.101834 kernel: PCI: Using ACPI for IRQ routing Jan 30 15:44:23.101845 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 15:44:23.101855 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 15:44:23.101866 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 30 15:44:23.101971 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 30 15:44:23.102084 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 30 15:44:23.102203 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 15:44:23.102219 kernel: vgaarb: loaded Jan 30 15:44:23.102230 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 15:44:23.102240 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 15:44:23.102269 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 15:44:23.102299 kernel: pnp: PnP ACPI init Jan 30 15:44:23.102415 kernel: pnp 00:03: [dma 2] Jan 30 15:44:23.102433 kernel: pnp: PnP ACPI: found 5 devices Jan 30 15:44:23.102444 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 15:44:23.102459 kernel: NET: Registered PF_INET protocol family Jan 30 15:44:23.102470 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 15:44:23.102482 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 15:44:23.102492 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 15:44:23.102503 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 15:44:23.102514 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 30 15:44:23.102524 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 15:44:23.102535 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 15:44:23.102546 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 15:44:23.102559 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 15:44:23.102569 kernel: NET: Registered PF_XDP protocol family Jan 30 15:44:23.102667 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 15:44:23.102764 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 15:44:23.102860 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 15:44:23.102956 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 30 15:44:23.103051 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 30 15:44:23.103160 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 30 15:44:23.103336 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 15:44:23.103357 kernel: PCI: CLS 0 bytes, default 64 Jan 30 15:44:23.103368 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 15:44:23.103378 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 30 15:44:23.103389 kernel: Initialise system trusted keyrings Jan 30 15:44:23.103400 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 15:44:23.103411 kernel: Key type asymmetric registered Jan 30 15:44:23.103421 kernel: Asymmetric key parser 'x509' registered Jan 30 15:44:23.103436 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 15:44:23.103447 kernel: io scheduler mq-deadline registered Jan 30 15:44:23.103457 kernel: io scheduler kyber registered Jan 30 15:44:23.103468 kernel: io scheduler bfq registered Jan 30 15:44:23.103478 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 15:44:23.103490 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 30 15:44:23.103501 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 30 15:44:23.103512 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 30 15:44:23.103522 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 30 15:44:23.103537 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 15:44:23.103547 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 15:44:23.103558 kernel: random: crng init done Jan 30 15:44:23.103568 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 15:44:23.103578 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 15:44:23.103589 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 15:44:23.103698 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 30 15:44:23.103715 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 15:44:23.103812 kernel: rtc_cmos 00:04: registered as rtc0 Jan 30 15:44:23.103916 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T15:44:22 UTC (1738251862) Jan 30 15:44:23.104038 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 30 15:44:23.104054 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 15:44:23.104065 kernel: NET: Registered PF_INET6 protocol family Jan 30 15:44:23.104076 kernel: Segment Routing with IPv6 Jan 30 15:44:23.104086 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 15:44:23.104097 kernel: NET: Registered PF_PACKET 
protocol family Jan 30 15:44:23.104108 kernel: Key type dns_resolver registered Jan 30 15:44:23.104122 kernel: IPI shorthand broadcast: enabled Jan 30 15:44:23.104133 kernel: sched_clock: Marking stable (1055008897, 170596667)->(1269884027, -44278463) Jan 30 15:44:23.104144 kernel: registered taskstats version 1 Jan 30 15:44:23.104155 kernel: Loading compiled-in X.509 certificates Jan 30 15:44:23.104166 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 15:44:23.104176 kernel: Key type .fscrypt registered Jan 30 15:44:23.104186 kernel: Key type fscrypt-provisioning registered Jan 30 15:44:23.104197 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 15:44:23.104208 kernel: ima: Allocated hash algorithm: sha1 Jan 30 15:44:23.104221 kernel: ima: No architecture policies found Jan 30 15:44:23.104231 kernel: clk: Disabling unused clocks Jan 30 15:44:23.104242 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 15:44:23.104269 kernel: Write protecting the kernel read-only data: 36864k Jan 30 15:44:23.104298 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 15:44:23.104310 kernel: Run /init as init process Jan 30 15:44:23.104321 kernel: with arguments: Jan 30 15:44:23.104331 kernel: /init Jan 30 15:44:23.104341 kernel: with environment: Jan 30 15:44:23.104355 kernel: HOME=/ Jan 30 15:44:23.104365 kernel: TERM=linux Jan 30 15:44:23.104375 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 15:44:23.104389 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 15:44:23.104403 systemd[1]: Detected virtualization kvm. Jan 30 15:44:23.104414 systemd[1]: Detected architecture x86-64. Jan 30 15:44:23.104426 systemd[1]: Running in initrd. Jan 30 15:44:23.104439 systemd[1]: No hostname configured, using default hostname. Jan 30 15:44:23.104450 systemd[1]: Hostname set to <localhost>. Jan 30 15:44:23.104462 systemd[1]: Initializing machine ID from VM UUID. Jan 30 15:44:23.104474 systemd[1]: Queued start job for default target initrd.target. Jan 30 15:44:23.104485 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:44:23.104496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:44:23.104508 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 15:44:23.104530 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 15:44:23.104544 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 15:44:23.104555 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 15:44:23.104569 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 15:44:23.104581 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 15:44:23.104593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 30 15:44:23.104608 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:44:23.104620 systemd[1]: Reached target paths.target - Path Units. Jan 30 15:44:23.104632 systemd[1]: Reached target slices.target - Slice Units. Jan 30 15:44:23.104643 systemd[1]: Reached target swap.target - Swaps. Jan 30 15:44:23.104655 systemd[1]: Reached target timers.target - Timer Units. Jan 30 15:44:23.104667 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 15:44:23.104679 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 15:44:23.104690 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 15:44:23.104704 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 15:44:23.104716 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:44:23.104728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 15:44:23.104739 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:44:23.104751 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 15:44:23.104762 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 15:44:23.104774 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 15:44:23.104785 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 15:44:23.104797 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 15:44:23.104811 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 15:44:23.104823 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 15:44:23.104855 systemd-journald[184]: Collecting audit messages is disabled. Jan 30 15:44:23.104883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:44:23.104898 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 15:44:23.104911 systemd-journald[184]: Journal started Jan 30 15:44:23.104936 systemd-journald[184]: Runtime Journal (/run/log/journal/98954215d3b34f43b02161915a82e7c2) is 8.0M, max 78.3M, 70.3M free. Jan 30 15:44:23.110482 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 15:44:23.110174 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:44:23.116216 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 15:44:23.128320 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 15:44:23.133499 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 15:44:23.135696 systemd-modules-load[185]: Inserted module 'overlay' Jan 30 15:44:23.141299 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:44:23.152373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 15:44:23.192124 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 15:44:23.192148 kernel: Bridge firewalling registered Jan 30 15:44:23.171284 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 30 15:44:23.198453 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 30 15:44:23.199786 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:44:23.212534 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:44:23.215615 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:44:23.216511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:44:23.217232 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:44:23.226926 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:44:23.237430 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 15:44:23.238354 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:44:23.245636 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 15:44:23.260370 dracut-cmdline[220]: dracut-dracut-053 Jan 30 15:44:23.264021 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:44:23.270046 systemd-resolved[218]: Positive Trust Anchors: Jan 30 15:44:23.270064 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 15:44:23.270108 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 15:44:23.273739 systemd-resolved[218]: Defaulting to hostname 'linux'. Jan 30 15:44:23.274822 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 15:44:23.275528 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:44:23.394313 kernel: SCSI subsystem initialized Jan 30 15:44:23.407354 kernel: Loading iSCSI transport class v2.0-870. Jan 30 15:44:23.420967 kernel: iscsi: registered transport (tcp) Jan 30 15:44:23.444458 kernel: iscsi: registered transport (qla4xxx) Jan 30 15:44:23.444519 kernel: QLogic iSCSI HBA Driver Jan 30 15:44:23.512568 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 15:44:23.521590 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 15:44:23.560605 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 30 15:44:23.560674 kernel: device-mapper: uevent: version 1.0.3 Jan 30 15:44:23.561363 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 15:44:23.627377 kernel: raid6: sse2x4 gen() 5167 MB/s Jan 30 15:44:23.644371 kernel: raid6: sse2x2 gen() 11175 MB/s Jan 30 15:44:23.663237 kernel: raid6: sse2x1 gen() 9540 MB/s Jan 30 15:44:23.663354 kernel: raid6: using algorithm sse2x2 gen() 11175 MB/s Jan 30 15:44:23.682288 kernel: raid6: .... xor() 8895 MB/s, rmw enabled Jan 30 15:44:23.682349 kernel: raid6: using ssse3x2 recovery algorithm Jan 30 15:44:23.705985 kernel: xor: measuring software checksum speed Jan 30 15:44:23.706069 kernel: prefetch64-sse : 17090 MB/sec Jan 30 15:44:23.706495 kernel: generic_sse : 16798 MB/sec Jan 30 15:44:23.707593 kernel: xor: using function: prefetch64-sse (17090 MB/sec) Jan 30 15:44:23.894325 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 15:44:23.910735 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 15:44:23.921682 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:44:23.936739 systemd-udevd[403]: Using default interface naming scheme 'v255'. Jan 30 15:44:23.941407 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:44:23.949478 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 15:44:23.976129 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jan 30 15:44:24.016347 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 15:44:24.022590 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 15:44:24.074852 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:44:24.082693 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 15:44:24.095157 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 15:44:24.095997 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 15:44:24.099487 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:44:24.102713 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 15:44:24.111905 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 15:44:24.137956 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 15:44:24.185302 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 30 15:44:24.225427 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 30 15:44:24.225562 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 15:44:24.225583 kernel: GPT:17805311 != 20971519 Jan 30 15:44:24.225596 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 15:44:24.225608 kernel: GPT:17805311 != 20971519 Jan 30 15:44:24.225620 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 15:44:24.225633 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 15:44:24.225645 kernel: libata version 3.00 loaded. 
Jan 30 15:44:24.225657 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 30 15:44:24.231616 kernel: scsi host0: ata_piix Jan 30 15:44:24.231763 kernel: scsi host1: ata_piix Jan 30 15:44:24.231885 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 30 15:44:24.231901 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 30 15:44:24.197918 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 15:44:24.198055 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:44:24.198982 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:44:24.199762 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:44:24.199890 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:44:24.200506 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:44:24.211584 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:44:24.257109 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (471) Jan 30 15:44:24.270276 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (472) Jan 30 15:44:24.282609 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 15:44:24.308964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:44:24.315526 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 15:44:24.320830 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 15:44:24.322491 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 15:44:24.328713 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 15:44:24.338476 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 15:44:24.341440 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:44:24.363301 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 15:44:24.363757 disk-uuid[506]: Primary Header is updated. Jan 30 15:44:24.363757 disk-uuid[506]: Secondary Entries is updated. Jan 30 15:44:24.363757 disk-uuid[506]: Secondary Header is updated. Jan 30 15:44:24.368414 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:44:25.387374 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 15:44:25.387454 disk-uuid[516]: The operation has completed successfully. Jan 30 15:44:25.445052 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 15:44:25.445196 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 15:44:25.456425 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 15:44:25.462361 sh[529]: Success Jan 30 15:44:25.488327 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 30 15:44:25.561090 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 15:44:25.562281 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 15:44:25.569413 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 15:44:25.590584 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 15:44:25.590634 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:44:25.592925 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 15:44:25.595325 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 15:44:25.598403 kernel: BTRFS info (device dm-0): using free space tree Jan 30 15:44:25.615511 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 15:44:25.616839 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 15:44:25.621457 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 15:44:25.625447 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 15:44:25.636334 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:44:25.640040 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:44:25.640095 kernel: BTRFS info (device vda6): using free space tree Jan 30 15:44:25.645280 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 15:44:25.658781 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:44:25.658447 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 15:44:25.673001 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 15:44:25.679552 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 15:44:25.750756 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 15:44:25.764511 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 15:44:25.784924 systemd-networkd[713]: lo: Link UP Jan 30 15:44:25.784934 systemd-networkd[713]: lo: Gained carrier Jan 30 15:44:25.786079 systemd-networkd[713]: Enumeration completed Jan 30 15:44:25.786152 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 15:44:25.787120 systemd-networkd[713]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:44:25.787124 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:44:25.787390 systemd[1]: Reached target network.target - Network. Jan 30 15:44:25.790733 systemd-networkd[713]: eth0: Link UP Jan 30 15:44:25.790741 systemd-networkd[713]: eth0: Gained carrier Jan 30 15:44:25.790767 systemd-networkd[713]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:44:25.811530 systemd-networkd[713]: eth0: DHCPv4 address 172.24.4.74/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 30 15:44:25.823308 ignition[630]: Ignition 2.19.0 Jan 30 15:44:25.823321 ignition[630]: Stage: fetch-offline Jan 30 15:44:25.823366 ignition[630]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:25.823377 ignition[630]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:25.825731 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 15:44:25.823485 ignition[630]: parsed url from cmdline: "" Jan 30 15:44:25.823489 ignition[630]: no config URL provided Jan 30 15:44:25.823495 ignition[630]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 15:44:25.823503 ignition[630]: no config at "/usr/lib/ignition/user.ign" Jan 30 15:44:25.823511 ignition[630]: failed to fetch config: resource requires networking Jan 30 15:44:25.823714 ignition[630]: Ignition finished successfully Jan 30 15:44:25.836515 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 15:44:25.849964 ignition[721]: Ignition 2.19.0 Jan 30 15:44:25.849980 ignition[721]: Stage: fetch Jan 30 15:44:25.850162 ignition[721]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:25.850174 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:25.850299 ignition[721]: parsed url from cmdline: "" Jan 30 15:44:25.850303 ignition[721]: no config URL provided Jan 30 15:44:25.850308 ignition[721]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 15:44:25.850322 ignition[721]: no config at "/usr/lib/ignition/user.ign" Jan 30 15:44:25.850458 ignition[721]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 30 15:44:25.850488 ignition[721]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 30 15:44:25.850519 ignition[721]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 30 15:44:26.030297 ignition[721]: GET result: OK Jan 30 15:44:26.030424 ignition[721]: parsing config with SHA512: bff432329b9de5e2acd30d0d63eaa8bbd8ebedb9a88e49e2223c7eef55734a507421a36bd54f5092ca74eab8efdc920741e4c10ea7e0381b896b1c2f2d7f5dd3 Jan 30 15:44:26.038799 unknown[721]: fetched base config from "system" Jan 30 15:44:26.038830 unknown[721]: fetched base config from "system" Jan 30 15:44:26.039506 ignition[721]: fetch: fetch complete Jan 30 15:44:26.038845 unknown[721]: fetched user config from "openstack" Jan 30 15:44:26.039523 ignition[721]: fetch: fetch passed Jan 30 15:44:26.043388 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 15:44:26.039642 ignition[721]: Ignition finished successfully Jan 30 15:44:26.053616 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 15:44:26.098921 ignition[728]: Ignition 2.19.0 Jan 30 15:44:26.098950 ignition[728]: Stage: kargs Jan 30 15:44:26.099490 ignition[728]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:26.099519 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:26.101503 ignition[728]: kargs: kargs passed Jan 30 15:44:26.104897 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 15:44:26.101610 ignition[728]: Ignition finished successfully Jan 30 15:44:26.115664 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 15:44:26.151032 ignition[734]: Ignition 2.19.0 Jan 30 15:44:26.151050 ignition[734]: Stage: disks Jan 30 15:44:26.154710 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 15:44:26.151297 ignition[734]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:26.157017 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 15:44:26.151311 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:26.158814 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jan 30 15:44:26.152049 ignition[734]: disks: disks passed Jan 30 15:44:26.160694 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 15:44:26.152096 ignition[734]: Ignition finished successfully Jan 30 15:44:26.162655 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 15:44:26.164682 systemd[1]: Reached target basic.target - Basic System. Jan 30 15:44:26.173511 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 15:44:26.210674 systemd-fsck[742]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 15:44:26.222354 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 15:44:26.231480 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 15:44:26.383312 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 15:44:26.384665 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 15:44:26.385861 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 15:44:26.400454 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 15:44:26.404541 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 15:44:26.408053 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 15:44:26.412388 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 30 15:44:26.432195 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (750) Jan 30 15:44:26.432225 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:44:26.432242 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:44:26.432279 kernel: BTRFS info (device vda6): using free space tree Jan 30 15:44:26.433283 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 15:44:26.416185 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 15:44:26.416246 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 15:44:26.447816 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 15:44:26.449393 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 15:44:26.464430 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 15:44:26.567614 initrd-setup-root[780]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 15:44:26.575314 initrd-setup-root[787]: cut: /sysroot/etc/group: No such file or directory Jan 30 15:44:26.581097 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 15:44:26.590496 initrd-setup-root[801]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 15:44:26.747438 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 15:44:26.765504 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 15:44:26.770574 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 15:44:26.786830 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 15:44:26.792698 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:44:26.838109 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 30 15:44:26.849707 ignition[869]: INFO : Ignition 2.19.0 Jan 30 15:44:26.849707 ignition[869]: INFO : Stage: mount Jan 30 15:44:26.850922 ignition[869]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:26.850922 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:26.852966 ignition[869]: INFO : mount: mount passed Jan 30 15:44:26.853530 ignition[869]: INFO : Ignition finished successfully Jan 30 15:44:26.855322 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 15:44:27.197631 systemd-networkd[713]: eth0: Gained IPv6LL Jan 30 15:44:33.692830 coreos-metadata[752]: Jan 30 15:44:33.692 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:44:33.734535 coreos-metadata[752]: Jan 30 15:44:33.734 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 15:44:33.752059 coreos-metadata[752]: Jan 30 15:44:33.751 INFO Fetch successful Jan 30 15:44:33.753756 coreos-metadata[752]: Jan 30 15:44:33.752 INFO wrote hostname ci-4081-3-0-c-370142c247.novalocal to /sysroot/etc/hostname Jan 30 15:44:33.755834 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 30 15:44:33.756339 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 30 15:44:33.769477 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 15:44:33.797637 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 15:44:33.828322 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (885) Jan 30 15:44:33.838686 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:44:33.838784 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:44:33.842832 kernel: BTRFS info (device vda6): using free space tree Jan 30 15:44:33.854389 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 15:44:33.859244 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 15:44:33.910555 ignition[902]: INFO : Ignition 2.19.0 Jan 30 15:44:33.913956 ignition[902]: INFO : Stage: files Jan 30 15:44:33.915203 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:33.915203 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:33.918455 ignition[902]: DEBUG : files: compiled without relabeling support, skipping Jan 30 15:44:33.918455 ignition[902]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 15:44:33.918455 ignition[902]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 15:44:33.925035 ignition[902]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 15:44:33.925856 ignition[902]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 15:44:33.926667 ignition[902]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 15:44:33.926382 unknown[902]: wrote ssh authorized keys file for user: core Jan 30 15:44:33.929499 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 30 15:44:33.930413 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 15:44:33.930413 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 15:44:33.930413 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 15:44:33.933392 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:44:33.933392 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:44:33.933392 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:44:33.933392 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 15:44:34.372524 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 30 15:44:36.146580 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:44:36.150681 ignition[902]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 15:44:36.150681 ignition[902]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 15:44:36.150681 ignition[902]: INFO : files: files passed Jan 30 15:44:36.150681 ignition[902]: INFO : Ignition finished successfully Jan 30 15:44:36.152416 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 15:44:36.162557 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 30 15:44:36.166845 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 15:44:36.172780 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 15:44:36.172957 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 15:44:36.190079 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:44:36.191424 initrd-setup-root-after-ignition[931]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:44:36.192929 initrd-setup-root-after-ignition[935]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:44:36.196530 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 15:44:36.197455 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 15:44:36.205604 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 15:44:36.241550 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 15:44:36.241769 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 15:44:36.246045 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 15:44:36.247103 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 15:44:36.249462 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 15:44:36.258565 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 15:44:36.275548 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 15:44:36.283515 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 15:44:36.302846 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:44:36.304590 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:44:36.307783 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 15:44:36.310688 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 15:44:36.310962 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 15:44:36.314121 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 15:44:36.316060 systemd[1]: Stopped target basic.target - Basic System. Jan 30 15:44:36.318964 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 15:44:36.321707 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 15:44:36.324362 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 15:44:36.327450 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 15:44:36.339856 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 15:44:36.342991 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 15:44:36.345902 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 15:44:36.348936 systemd[1]: Stopped target swap.target - Swaps. Jan 30 15:44:36.351754 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 15:44:36.352064 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 15:44:36.355181 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 30 15:44:36.357201 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 15:44:36.359775 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 15:44:36.360050 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:44:36.362866 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 15:44:36.363130 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 15:44:36.367167 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 15:44:36.367508 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 15:44:36.370009 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 15:44:36.370316 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 15:44:36.379815 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 15:44:36.391858 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 15:44:36.393193 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 15:44:36.393652 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:44:36.400530 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 15:44:36.400752 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 15:44:36.408685 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 15:44:36.409020 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 15:44:36.423118 ignition[955]: INFO : Ignition 2.19.0 Jan 30 15:44:36.425218 ignition[955]: INFO : Stage: umount Jan 30 15:44:36.425218 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:44:36.425218 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:44:36.425218 ignition[955]: INFO : umount: umount passed Jan 30 15:44:36.425218 ignition[955]: INFO : Ignition finished successfully Jan 30 15:44:36.425476 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 15:44:36.428449 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 15:44:36.428547 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 15:44:36.431108 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 15:44:36.431175 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 15:44:36.432183 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 15:44:36.432226 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 15:44:36.433296 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 15:44:36.433335 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 15:44:36.434404 systemd[1]: Stopped target network.target - Network. Jan 30 15:44:36.435421 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 15:44:36.435465 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 15:44:36.436534 systemd[1]: Stopped target paths.target - Path Units. Jan 30 15:44:36.437575 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 15:44:36.437614 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:44:36.438711 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 30 15:44:36.439764 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 15:44:36.440842 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 15:44:36.440876 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 15:44:36.442061 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 15:44:36.442093 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 15:44:36.443300 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 15:44:36.443341 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 15:44:36.444388 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 15:44:36.444429 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 15:44:36.445722 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 15:44:36.447152 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 15:44:36.448444 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 15:44:36.448525 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 15:44:36.449630 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 15:44:36.449702 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 15:44:36.450333 systemd-networkd[713]: eth0: DHCPv6 lease lost Jan 30 15:44:36.452126 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 15:44:36.452213 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 15:44:36.453205 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 15:44:36.453238 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:44:36.459422 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 15:44:36.460697 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 15:44:36.460755 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 15:44:36.461409 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:44:36.462142 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 15:44:36.462748 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 15:44:36.471153 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 15:44:36.471235 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:44:36.472483 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 15:44:36.472525 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 15:44:36.475101 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 15:44:36.475144 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:44:36.476731 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 15:44:36.476871 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:44:36.478343 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 15:44:36.478440 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 15:44:36.479953 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 15:44:36.480005 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 15:44:36.481381 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 30 15:44:36.481412 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:44:36.482413 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 15:44:36.482455 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 15:44:36.484117 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 15:44:36.484156 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 15:44:36.485207 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 15:44:36.485247 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:44:36.495399 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 15:44:36.496082 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 15:44:36.496143 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:44:36.497236 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 15:44:36.497309 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:44:36.499593 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 15:44:36.499635 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:44:36.500887 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:44:36.500927 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:44:36.502554 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 15:44:36.502647 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 15:44:36.503939 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 15:44:36.512633 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 15:44:36.520890 systemd[1]: Switching root. Jan 30 15:44:36.550974 systemd-journald[184]: Journal stopped Jan 30 15:44:38.215436 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 30 15:44:38.215524 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 15:44:38.215544 kernel: SELinux: policy capability open_perms=1 Jan 30 15:44:38.215558 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 15:44:38.215571 kernel: SELinux: policy capability always_check_network=0 Jan 30 15:44:38.215584 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 15:44:38.215598 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 15:44:38.215616 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 15:44:38.215629 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 15:44:38.215648 kernel: audit: type=1403 audit(1738251877.174:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 15:44:38.215665 systemd[1]: Successfully loaded SELinux policy in 83.316ms. Jan 30 15:44:38.215689 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.251ms. Jan 30 15:44:38.215705 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 15:44:38.215720 systemd[1]: Detected virtualization kvm. 
Jan 30 15:44:38.215737 systemd[1]: Detected architecture x86-64. Jan 30 15:44:38.215751 systemd[1]: Detected first boot. Jan 30 15:44:38.215765 systemd[1]: Hostname set to . Jan 30 15:44:38.215782 systemd[1]: Initializing machine ID from VM UUID. Jan 30 15:44:38.215796 zram_generator::config[998]: No configuration found. Jan 30 15:44:38.215813 systemd[1]: Populated /etc with preset unit settings. Jan 30 15:44:38.215828 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 15:44:38.215842 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 15:44:38.215856 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 15:44:38.215872 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 15:44:38.215914 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 15:44:38.215932 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 15:44:38.215946 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 15:44:38.215961 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 15:44:38.215980 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 15:44:38.215995 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 15:44:38.216010 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 15:44:38.216026 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:44:38.216040 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:44:38.216055 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 15:44:38.216069 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 15:44:38.216083 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 15:44:38.216100 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 15:44:38.216115 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 15:44:38.216134 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 15:44:38.216148 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 15:44:38.216162 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 15:44:38.216179 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 15:44:38.216198 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 15:44:38.216225 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:44:38.216242 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 15:44:38.217344 systemd[1]: Reached target slices.target - Slice Units. Jan 30 15:44:38.217375 systemd[1]: Reached target swap.target - Swaps. Jan 30 15:44:38.217398 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 15:44:38.217421 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 15:44:38.217437 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
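Among the entries above, systemd reports a first boot and initializes the machine ID from the VM UUID. On KVM guests that UUID is commonly exposed via DMI at /sys/class/dmi/id/product_uuid; the short sketch below only reads and normalizes that value for inspection. The path, the root requirement, and the formatting are assumptions for illustration, not a description of systemd's exact derivation:

```python
from pathlib import Path

# Reading this file typically requires root; the path is an assumption
# that holds on common KVM/QEMU guests.
DMI_UUID = Path("/sys/class/dmi/id/product_uuid")

def vm_uuid_as_hex() -> str:
    raw = DMI_UUID.read_text().strip()   # dashed UUID, e.g. "01234567-89ab-..."
    return raw.replace("-", "").lower()  # 32 hex chars, machine-id style

if __name__ == "__main__":
    print(vm_uuid_as_hex())
```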
Jan 30 15:44:38.217452 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 15:44:38.217466 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:44:38.217485 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 15:44:38.217499 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 15:44:38.217514 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 15:44:38.217529 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 15:44:38.217544 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:44:38.217558 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 15:44:38.217572 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 15:44:38.217586 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 15:44:38.217602 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 15:44:38.217620 systemd[1]: Reached target machines.target - Containers. Jan 30 15:44:38.217635 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 15:44:38.217650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:44:38.217665 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 15:44:38.217679 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 15:44:38.217694 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 15:44:38.217708 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 15:44:38.217722 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 15:44:38.217739 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 15:44:38.217753 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 15:44:38.217768 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 15:44:38.217782 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 15:44:38.217798 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 15:44:38.217812 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 15:44:38.217827 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 15:44:38.217841 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 15:44:38.217855 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 15:44:38.217872 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 15:44:38.217887 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 15:44:38.217901 kernel: fuse: init (API version 7.39) Jan 30 15:44:38.217915 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 15:44:38.217929 kernel: loop: module loaded Jan 30 15:44:38.217968 systemd-journald[1098]: Collecting audit messages is disabled. 
Jan 30 15:44:38.217996 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 15:44:38.218014 systemd[1]: Stopped verity-setup.service. Jan 30 15:44:38.218030 systemd-journald[1098]: Journal started Jan 30 15:44:38.218058 systemd-journald[1098]: Runtime Journal (/run/log/journal/98954215d3b34f43b02161915a82e7c2) is 8.0M, max 78.3M, 70.3M free. Jan 30 15:44:37.870680 systemd[1]: Queued start job for default target multi-user.target. Jan 30 15:44:37.896552 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 15:44:37.896965 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 15:44:38.225342 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:44:38.231286 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 15:44:38.245424 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 15:44:38.246104 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 15:44:38.246957 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 15:44:38.248823 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 15:44:38.249446 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 15:44:38.250144 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 15:44:38.250965 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 15:44:38.252759 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:44:38.253583 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 15:44:38.253725 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 15:44:38.257603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 15:44:38.257771 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:44:38.258604 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:44:38.258737 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:44:38.259571 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 15:44:38.259701 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 15:44:38.262565 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 15:44:38.262722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 15:44:38.263616 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 15:44:38.264477 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 15:44:38.265317 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 15:44:38.278456 kernel: ACPI: bus type drm_connector registered Jan 30 15:44:38.281374 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 15:44:38.281560 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 15:44:38.282604 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 15:44:38.289637 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 15:44:38.295615 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 30 15:44:38.297902 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 15:44:38.297943 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 15:44:38.300537 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 15:44:38.306405 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 15:44:38.313545 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 15:44:38.315503 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:44:38.323986 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 15:44:38.327395 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 15:44:38.328090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 15:44:38.335471 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 15:44:38.336749 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 15:44:38.341415 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:44:38.343432 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 15:44:38.346464 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 15:44:38.349813 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 15:44:38.351478 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 15:44:38.352905 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 15:44:38.367839 systemd-journald[1098]: Time spent on flushing to /var/log/journal/98954215d3b34f43b02161915a82e7c2 is 63.525ms for 929 entries. Jan 30 15:44:38.367839 systemd-journald[1098]: System Journal (/var/log/journal/98954215d3b34f43b02161915a82e7c2) is 8.0M, max 584.8M, 576.8M free. Jan 30 15:44:38.522628 systemd-journald[1098]: Received client request to flush runtime journal. Jan 30 15:44:38.522684 kernel: loop0: detected capacity change from 0 to 8 Jan 30 15:44:38.522709 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 15:44:38.397363 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 15:44:38.398527 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 15:44:38.412443 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 15:44:38.413482 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:44:38.424542 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 15:44:38.449458 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:44:38.463212 udevadm[1141]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Jan 30 15:44:38.530539 kernel: loop1: detected capacity change from 0 to 142488 Jan 30 15:44:38.521225 systemd-tmpfiles[1132]: ACLs are not supported, ignoring. Jan 30 15:44:38.521278 systemd-tmpfiles[1132]: ACLs are not supported, ignoring. Jan 30 15:44:38.526327 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 15:44:38.533662 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:44:38.541126 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 15:44:38.542161 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 15:44:38.557945 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 15:44:38.609023 kernel: loop2: detected capacity change from 0 to 140768 Jan 30 15:44:38.616931 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 15:44:38.630434 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 15:44:38.661376 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Jan 30 15:44:38.661615 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Jan 30 15:44:38.666760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:44:38.677295 kernel: loop3: detected capacity change from 0 to 210664 Jan 30 15:44:38.750109 kernel: loop4: detected capacity change from 0 to 8 Jan 30 15:44:38.755283 kernel: loop5: detected capacity change from 0 to 142488 Jan 30 15:44:38.814359 kernel: loop6: detected capacity change from 0 to 140768 Jan 30 15:44:38.870652 kernel: loop7: detected capacity change from 0 to 210664 Jan 30 15:44:38.925750 (sd-merge)[1160]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 30 15:44:38.926907 (sd-merge)[1160]: Merged extensions into '/usr'. Jan 30 15:44:38.938828 systemd[1]: Reloading requested from client PID 1131 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 15:44:38.938860 systemd[1]: Reloading... Jan 30 15:44:39.021598 zram_generator::config[1182]: No configuration found. Jan 30 15:44:39.286049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:44:39.356134 systemd[1]: Reloading finished in 416 ms. Jan 30 15:44:39.382001 ldconfig[1126]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 15:44:39.385849 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 15:44:39.386922 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 15:44:39.390567 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 15:44:39.397485 systemd[1]: Starting ensure-sysext.service... Jan 30 15:44:39.399161 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 15:44:39.402504 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:44:39.413844 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... Jan 30 15:44:39.413866 systemd[1]: Reloading... Jan 30 15:44:39.442453 systemd-udevd[1245]: Using default interface naming scheme 'v255'. 
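The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-openstack' extensions onto /usr, after which systemd reloads its configuration. As a loose illustration, the sketch below merely lists candidate extension images the way an inspection script might; the search directories are an assumption based on common systemd-sysext documentation, not values taken from this log:

```python
from pathlib import Path

# Directories systemd-sysext is commonly documented to scan; treat this list
# as an assumption for the sketch rather than an exhaustive or exact set.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_sysext_images() -> list[str]:
    names = []
    for d in SEARCH_DIRS:
        path = Path(d)
        if not path.is_dir():
            continue
        for entry in sorted(path.iterdir()):
            # Raw disk images ("name.raw") and plain directories both count.
            if entry.suffix == ".raw":
                names.append(entry.stem)
            elif entry.is_dir():
                names.append(entry.name)
    return names

if __name__ == "__main__":
    print("candidate extensions:", list_sysext_images())
```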
Jan 30 15:44:39.448997 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 15:44:39.449421 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 15:44:39.450421 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 15:44:39.450747 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jan 30 15:44:39.450813 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jan 30 15:44:39.456800 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 15:44:39.456814 systemd-tmpfiles[1244]: Skipping /boot Jan 30 15:44:39.483409 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 15:44:39.483421 systemd-tmpfiles[1244]: Skipping /boot Jan 30 15:44:39.521423 zram_generator::config[1287]: No configuration found. Jan 30 15:44:39.578445 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1269) Jan 30 15:44:39.694793 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 15:44:39.708296 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 15:44:39.728283 kernel: ACPI: button: Power Button [PWRF] Jan 30 15:44:39.770277 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 15:44:39.803284 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 15:44:39.803455 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:44:39.869317 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 15:44:39.869393 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 15:44:39.873790 kernel: Console: switching to colour dummy device 80x25 Jan 30 15:44:39.875550 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 15:44:39.875594 kernel: [drm] features: -context_init Jan 30 15:44:39.877586 kernel: [drm] number of scanouts: 1 Jan 30 15:44:39.877626 kernel: [drm] number of cap sets: 0 Jan 30 15:44:39.879798 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 15:44:39.880185 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 15:44:39.880300 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 15:44:39.882682 systemd[1]: Reloading finished in 468 ms. Jan 30 15:44:39.890473 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 15:44:39.890592 kernel: Console: switching to colour frame buffer device 160x50 Jan 30 15:44:39.897312 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 15:44:39.905767 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:44:39.916841 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:44:39.947505 systemd[1]: Finished ensure-sysext.service. Jan 30 15:44:39.964088 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:44:39.968427 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jan 30 15:44:40.007588 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 15:44:40.008196 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:44:40.011076 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 15:44:40.022697 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 15:44:40.029573 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 15:44:40.038546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 15:44:40.039998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:44:40.044520 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 15:44:40.047694 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 15:44:40.060503 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 15:44:40.063473 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 15:44:40.070816 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 15:44:40.081464 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 15:44:40.086756 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:44:40.086841 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:44:40.088857 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 15:44:40.090038 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 15:44:40.090516 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:44:40.091327 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 15:44:40.091800 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 15:44:40.095818 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:44:40.096074 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:44:40.096578 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 15:44:40.096787 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 15:44:40.108756 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 15:44:40.109418 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 15:44:40.109479 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 15:44:40.112700 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 15:44:40.125030 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 15:44:40.182495 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 15:44:40.184792 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 30 15:44:40.238293 lvm[1381]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 15:44:40.276664 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 15:44:40.284325 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:44:40.297445 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 15:44:40.307780 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 15:44:40.340584 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 15:44:40.354689 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 15:44:40.372504 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 15:44:40.393644 augenrules[1414]: No rules Jan 30 15:44:40.395646 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 15:44:40.408517 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 15:44:40.423630 systemd-networkd[1371]: lo: Link UP Jan 30 15:44:40.424751 systemd-networkd[1371]: lo: Gained carrier Jan 30 15:44:40.428017 systemd-networkd[1371]: Enumeration completed Jan 30 15:44:40.428386 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 15:44:40.429481 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:44:40.430225 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:44:40.432497 systemd-networkd[1371]: eth0: Link UP Jan 30 15:44:40.432502 systemd-networkd[1371]: eth0: Gained carrier Jan 30 15:44:40.432519 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:44:40.440891 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 15:44:40.446320 systemd-networkd[1371]: eth0: DHCPv4 address 172.24.4.74/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 30 15:44:40.446471 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:44:40.455137 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 15:44:40.455985 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 15:44:40.466393 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 15:44:40.469581 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 15:44:40.482677 systemd-resolved[1372]: Positive Trust Anchors: Jan 30 15:44:40.482694 systemd-resolved[1372]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 15:44:40.482735 systemd-resolved[1372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 15:44:40.488693 systemd-resolved[1372]: Using system hostname 'ci-4081-3-0-c-370142c247.novalocal'. Jan 30 15:44:40.490465 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 15:44:40.491235 systemd[1]: Reached target network.target - Network. Jan 30 15:44:40.491719 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:44:40.492181 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 15:44:40.495766 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 15:44:40.497973 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 15:44:40.500629 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 15:44:40.501137 systemd-timesyncd[1373]: Contacted time server 188.165.224.178:123 (0.flatcar.pool.ntp.org). Jan 30 15:44:40.501180 systemd-timesyncd[1373]: Initial clock synchronization to Thu 2025-01-30 15:44:40.816113 UTC. Jan 30 15:44:40.505727 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 15:44:40.509171 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 15:44:40.512182 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 15:44:40.512327 systemd[1]: Reached target paths.target - Path Units. Jan 30 15:44:40.513635 systemd[1]: Reached target timers.target - Timer Units. Jan 30 15:44:40.516472 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 15:44:40.520590 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 15:44:40.529064 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 15:44:40.534194 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 15:44:40.536009 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 15:44:40.537939 systemd[1]: Reached target basic.target - Basic System. Jan 30 15:44:40.540295 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 15:44:40.540409 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 15:44:40.546389 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 15:44:40.552190 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 15:44:40.557880 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 15:44:40.573658 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 15:44:40.583736 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
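In the entries above, systemd-networkd matches eth0 against zz-default.network and acquires 172.24.4.74/24 with gateway 172.24.4.1 over DHCPv4, and systemd-resolved then adopts the system hostname on that network. The snippet below just sanity-checks the logged lease values with Python's ipaddress module; it is an illustration, not part of any boot-time tooling:

```python
import ipaddress

# Address and gateway exactly as reported by systemd-networkd above.
iface = ipaddress.ip_interface("172.24.4.74/24")
gateway = ipaddress.ip_address("172.24.4.1")

print("network:        ", iface.network)                    # 172.24.4.0/24
print("gateway on-link:", gateway in iface.network)          # True
print("broadcast:      ", iface.network.broadcast_address)   # 172.24.4.255
```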
Jan 30 15:44:40.588323 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 15:44:40.591460 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 15:44:40.593488 jq[1434]: false Jan 30 15:44:40.600426 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 15:44:40.604833 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 15:44:40.619476 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 15:44:40.624707 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 15:44:40.625846 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 15:44:40.629390 extend-filesystems[1435]: Found loop4 Jan 30 15:44:40.629390 extend-filesystems[1435]: Found loop5 Jan 30 15:44:40.629390 extend-filesystems[1435]: Found loop6 Jan 30 15:44:40.629390 extend-filesystems[1435]: Found loop7 Jan 30 15:44:40.629390 extend-filesystems[1435]: Found vda Jan 30 15:44:40.629390 extend-filesystems[1435]: Found vda1 Jan 30 15:44:40.629390 extend-filesystems[1435]: Found vda2 Jan 30 15:44:40.629390 extend-filesystems[1435]: Found vda3 Jan 30 15:44:40.629390 extend-filesystems[1435]: Found usr Jan 30 15:44:40.629390 extend-filesystems[1435]: Found vda4 Jan 30 15:44:40.629390 extend-filesystems[1435]: Found vda6 Jan 30 15:44:40.629390 extend-filesystems[1435]: Found vda7 Jan 30 15:44:40.629390 extend-filesystems[1435]: Found vda9 Jan 30 15:44:40.629390 extend-filesystems[1435]: Checking size of /dev/vda9 Jan 30 15:44:40.795910 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 30 15:44:40.795950 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 30 15:44:40.795970 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1273) Jan 30 15:44:40.646610 dbus-daemon[1431]: [system] SELinux support is enabled Jan 30 15:44:40.633490 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 15:44:40.800764 extend-filesystems[1435]: Resized partition /dev/vda9 Jan 30 15:44:40.654432 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 15:44:40.816751 extend-filesystems[1461]: resize2fs 1.47.1 (20-May-2024) Jan 30 15:44:40.816751 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 15:44:40.816751 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 15:44:40.816751 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 30 15:44:40.841242 update_engine[1443]: I20250130 15:44:40.659686 1443 main.cc:92] Flatcar Update Engine starting Jan 30 15:44:40.841242 update_engine[1443]: I20250130 15:44:40.662496 1443 update_check_scheduler.cc:74] Next update check in 2m23s Jan 30 15:44:40.663691 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 15:44:40.851592 extend-filesystems[1435]: Resized filesystem in /dev/vda9 Jan 30 15:44:40.866142 jq[1447]: true Jan 30 15:44:40.678203 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
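The extend-filesystems entries above grow /dev/vda9 online from 1617920 to 2014203 4 KiB blocks. For context, a quick arithmetic sketch converts those logged block counts into sizes (block counts and block size are from the log; the GiB rounding is ours):

```python
BLOCK = 4096                                   # "(4k) blocks" per the EXT4 messages
OLD_BLOCKS, NEW_BLOCKS = 1_617_920, 2_014_203  # block counts from the log above

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before resize: {gib(OLD_BLOCKS):.2f} GiB")               # ~6.17 GiB
print(f"after resize:  {gib(NEW_BLOCKS):.2f} GiB")               # ~7.68 GiB
print(f"space gained:  {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")  # ~1.51 GiB
```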
Jan 30 15:44:40.678417 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 15:44:40.678692 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 15:44:40.866793 jq[1457]: true Jan 30 15:44:40.678821 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 15:44:40.712613 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 15:44:40.712821 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 15:44:40.730033 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 15:44:40.730061 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 15:44:40.733135 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 15:44:40.733154 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 15:44:40.735451 systemd[1]: Started update-engine.service - Update Engine. Jan 30 15:44:40.756591 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 15:44:40.756803 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 15:44:40.800283 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 15:44:40.800483 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 15:44:40.813019 systemd-logind[1440]: New seat seat0. Jan 30 15:44:40.834451 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 15:44:40.834469 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 15:44:40.845464 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 15:44:40.931041 bash[1482]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:44:40.932027 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 15:44:40.949501 systemd[1]: Starting sshkeys.service... Jan 30 15:44:40.970689 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 15:44:40.978442 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 15:44:40.983444 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 15:44:41.044232 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 15:44:41.126847 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 15:44:41.153483 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 15:44:41.167193 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 15:44:41.172743 systemd[1]: Started sshd@0-172.24.4.74:22-172.24.4.1:35298.service - OpenSSH per-connection server daemon (172.24.4.1:35298). Jan 30 15:44:41.184096 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 15:44:41.184483 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 15:44:41.194780 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 30 15:44:41.203375 containerd[1460]: time="2025-01-30T15:44:41.202199453Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 15:44:41.216948 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 15:44:41.231704 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 15:44:41.242143 containerd[1460]: time="2025-01-30T15:44:41.242082555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:44:41.243585 containerd[1460]: time="2025-01-30T15:44:41.243554065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:44:41.243653 containerd[1460]: time="2025-01-30T15:44:41.243638933Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 15:44:41.243732 containerd[1460]: time="2025-01-30T15:44:41.243717606Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 15:44:41.243965 containerd[1460]: time="2025-01-30T15:44:41.243945846Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 15:44:41.244032 containerd[1460]: time="2025-01-30T15:44:41.244016250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 15:44:41.244153 containerd[1460]: time="2025-01-30T15:44:41.244133078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:44:41.244212 containerd[1460]: time="2025-01-30T15:44:41.244198651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:44:41.244496 containerd[1460]: time="2025-01-30T15:44:41.244475094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:44:41.244996 containerd[1460]: time="2025-01-30T15:44:41.244613581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 15:44:41.244996 containerd[1460]: time="2025-01-30T15:44:41.244646914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:44:41.244996 containerd[1460]: time="2025-01-30T15:44:41.244660483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 15:44:41.244996 containerd[1460]: time="2025-01-30T15:44:41.244752079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:44:41.244996 containerd[1460]: time="2025-01-30T15:44:41.244961803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:44:41.244719 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jan 30 15:44:41.245642 containerd[1460]: time="2025-01-30T15:44:41.245618281Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:44:41.245783 containerd[1460]: time="2025-01-30T15:44:41.245765016Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 15:44:41.245936 containerd[1460]: time="2025-01-30T15:44:41.245916270Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 15:44:41.246059 containerd[1460]: time="2025-01-30T15:44:41.246040688Z" level=info msg="metadata content store policy set" policy=shared Jan 30 15:44:41.247749 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 15:44:41.259313 containerd[1460]: time="2025-01-30T15:44:41.259232033Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 15:44:41.259313 containerd[1460]: time="2025-01-30T15:44:41.259319713Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 15:44:41.259465 containerd[1460]: time="2025-01-30T15:44:41.259347464Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 15:44:41.259465 containerd[1460]: time="2025-01-30T15:44:41.259412611Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 15:44:41.259465 containerd[1460]: time="2025-01-30T15:44:41.259435499Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 15:44:41.259668 containerd[1460]: time="2025-01-30T15:44:41.259625459Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 15:44:41.260100 containerd[1460]: time="2025-01-30T15:44:41.260057062Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 15:44:41.260263 containerd[1460]: time="2025-01-30T15:44:41.260216022Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 15:44:41.260263 containerd[1460]: time="2025-01-30T15:44:41.260242180Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 15:44:41.260313 containerd[1460]: time="2025-01-30T15:44:41.260283447Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 15:44:41.260313 containerd[1460]: time="2025-01-30T15:44:41.260306783Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 15:44:41.260365 containerd[1460]: time="2025-01-30T15:44:41.260322643Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 15:44:41.260365 containerd[1460]: time="2025-01-30T15:44:41.260337514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 15:44:41.260412 containerd[1460]: time="2025-01-30T15:44:41.260372055Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 30 15:44:41.260412 containerd[1460]: time="2025-01-30T15:44:41.260397515Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 15:44:41.260464 containerd[1460]: time="2025-01-30T15:44:41.260413135Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 15:44:41.260464 containerd[1460]: time="2025-01-30T15:44:41.260428849Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 15:44:41.260509 containerd[1460]: time="2025-01-30T15:44:41.260463286Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 15:44:41.260509 containerd[1460]: time="2025-01-30T15:44:41.260489382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260552 containerd[1460]: time="2025-01-30T15:44:41.260505803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260552 containerd[1460]: time="2025-01-30T15:44:41.260524704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260608 containerd[1460]: time="2025-01-30T15:44:41.260552882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260608 containerd[1460]: time="2025-01-30T15:44:41.260569918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260608 containerd[1460]: time="2025-01-30T15:44:41.260590694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260608 containerd[1460]: time="2025-01-30T15:44:41.260606189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260706 containerd[1460]: time="2025-01-30T15:44:41.260623089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260706 containerd[1460]: time="2025-01-30T15:44:41.260638657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260706 containerd[1460]: time="2025-01-30T15:44:41.260656069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260706 containerd[1460]: time="2025-01-30T15:44:41.260677436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260706 containerd[1460]: time="2025-01-30T15:44:41.260693025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260819 containerd[1460]: time="2025-01-30T15:44:41.260707677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260819 containerd[1460]: time="2025-01-30T15:44:41.260727119Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 15:44:41.260819 containerd[1460]: time="2025-01-30T15:44:41.260750121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 30 15:44:41.260819 containerd[1460]: time="2025-01-30T15:44:41.260764836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260819 containerd[1460]: time="2025-01-30T15:44:41.260778258Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 15:44:41.260928 containerd[1460]: time="2025-01-30T15:44:41.260826837Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 15:44:41.260928 containerd[1460]: time="2025-01-30T15:44:41.260848767Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 15:44:41.260928 containerd[1460]: time="2025-01-30T15:44:41.260862440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 15:44:41.260928 containerd[1460]: time="2025-01-30T15:44:41.260877612Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 15:44:41.260928 containerd[1460]: time="2025-01-30T15:44:41.260890941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 15:44:41.260928 containerd[1460]: time="2025-01-30T15:44:41.260909040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 15:44:41.260928 containerd[1460]: time="2025-01-30T15:44:41.260921567Z" level=info msg="NRI interface is disabled by configuration." Jan 30 15:44:41.261086 containerd[1460]: time="2025-01-30T15:44:41.260933979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 15:44:41.261456 containerd[1460]: time="2025-01-30T15:44:41.261366258Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 15:44:41.261612 containerd[1460]: time="2025-01-30T15:44:41.261454969Z" level=info msg="Connect containerd service" Jan 30 15:44:41.261612 containerd[1460]: time="2025-01-30T15:44:41.261492396Z" level=info msg="using legacy CRI server" Jan 30 15:44:41.261612 containerd[1460]: time="2025-01-30T15:44:41.261500976Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 15:44:41.261612 containerd[1460]: time="2025-01-30T15:44:41.261594946Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 15:44:41.262397 containerd[1460]: time="2025-01-30T15:44:41.262358036Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 15:44:41.262586 
containerd[1460]: time="2025-01-30T15:44:41.262535218Z" level=info msg="Start subscribing containerd event" Jan 30 15:44:41.262624 containerd[1460]: time="2025-01-30T15:44:41.262591190Z" level=info msg="Start recovering state" Jan 30 15:44:41.262670 containerd[1460]: time="2025-01-30T15:44:41.262650807Z" level=info msg="Start event monitor" Jan 30 15:44:41.262698 containerd[1460]: time="2025-01-30T15:44:41.262669155Z" level=info msg="Start snapshots syncer" Jan 30 15:44:41.262698 containerd[1460]: time="2025-01-30T15:44:41.262680027Z" level=info msg="Start cni network conf syncer for default" Jan 30 15:44:41.262698 containerd[1460]: time="2025-01-30T15:44:41.262689003Z" level=info msg="Start streaming server" Jan 30 15:44:41.264537 containerd[1460]: time="2025-01-30T15:44:41.264497781Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 15:44:41.266239 containerd[1460]: time="2025-01-30T15:44:41.264638975Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 15:44:41.266239 containerd[1460]: time="2025-01-30T15:44:41.264750783Z" level=info msg="containerd successfully booted in 0.071066s" Jan 30 15:44:41.264825 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 15:44:42.106205 sshd[1509]: Accepted publickey for core from 172.24.4.1 port 35298 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:44:42.111501 sshd[1509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:44:42.139834 systemd-logind[1440]: New session 1 of user core. Jan 30 15:44:42.143559 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 15:44:42.158910 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 15:44:42.173816 systemd-networkd[1371]: eth0: Gained IPv6LL Jan 30 15:44:42.180478 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 15:44:42.197358 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 15:44:42.210327 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 15:44:42.222875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:44:42.240034 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 15:44:42.258030 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 15:44:42.290098 (systemd)[1528]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 15:44:42.337822 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 15:44:42.432496 systemd[1528]: Queued start job for default target default.target. Jan 30 15:44:42.442182 systemd[1528]: Created slice app.slice - User Application Slice. Jan 30 15:44:42.442205 systemd[1528]: Reached target paths.target - Paths. Jan 30 15:44:42.442219 systemd[1528]: Reached target timers.target - Timers. Jan 30 15:44:42.444137 systemd[1528]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 15:44:42.484605 systemd[1528]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 15:44:42.484814 systemd[1528]: Reached target sockets.target - Sockets. Jan 30 15:44:42.484845 systemd[1528]: Reached target basic.target - Basic System. Jan 30 15:44:42.484915 systemd[1528]: Reached target default.target - Main User Target. Jan 30 15:44:42.484963 systemd[1528]: Startup finished in 175ms. 
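containerd settled on the overlayfs snapshotter after skipping aufs (kernel module absent), blockfile, btrfs, devmapper and zfs; the btrfs and zfs skips in the entries above are simply checks on the filesystem type backing the snapshotter paths under /var/lib/containerd. A minimal sketch of that check, assuming the default containerd root logged above and a standard /proc/self/mounts layout:

    #!/usr/bin/env python3
    """Sketch: report which filesystem backs containerd's state directory.

    Mirrors the "skip plugin" checks above: the btrfs and zfs snapshotters
    are skipped unless their path sits on a matching filesystem. The path
    below is the default root logged by containerd; adjust if the daemon
    is configured differently.
    """
    import os

    CONTAINERD_ROOT = "/var/lib/containerd"  # default root from the log above

    def fs_type(path):
        """Return the filesystem type of the mount that contains *path*."""
        path = os.path.realpath(path)
        best, best_type = "", "unknown"
        with open("/proc/self/mounts") as mounts:
            for line in mounts:
                _dev, mountpoint, fstype = line.split()[:3]
                if path.startswith(mountpoint) and len(mountpoint) > len(best):
                    best, best_type = mountpoint, fstype
        return best_type

    if __name__ == "__main__":
        fstype = fs_type(CONTAINERD_ROOT)
        print(f"{CONTAINERD_ROOT} is on {fstype}")
        for snapshotter, needed in (("btrfs", "btrfs"), ("zfs", "zfs")):
            state = "usable" if fstype == needed else "would be skipped"
            print(f"{snapshotter} snapshotter: {state}")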
Jan 30 15:44:42.485102 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 15:44:42.493735 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 15:44:42.996770 systemd[1]: Started sshd@1-172.24.4.74:22-172.24.4.1:35304.service - OpenSSH per-connection server daemon (172.24.4.1:35304). Jan 30 15:44:44.119858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:44:44.134211 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:44:45.188831 sshd[1547]: Accepted publickey for core from 172.24.4.1 port 35304 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:44:45.190022 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:44:45.204241 systemd-logind[1440]: New session 2 of user core. Jan 30 15:44:45.210867 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 15:44:45.704364 kubelet[1555]: E0130 15:44:45.704248 1555 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:44:45.708908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:44:45.709082 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:44:45.709421 systemd[1]: kubelet.service: Consumed 2.107s CPU time. Jan 30 15:44:45.837651 sshd[1547]: pam_unix(sshd:session): session closed for user core Jan 30 15:44:45.848671 systemd[1]: sshd@1-172.24.4.74:22-172.24.4.1:35304.service: Deactivated successfully. Jan 30 15:44:45.852020 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 15:44:45.854030 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit. Jan 30 15:44:45.861055 systemd[1]: Started sshd@2-172.24.4.74:22-172.24.4.1:40860.service - OpenSSH per-connection server daemon (172.24.4.1:40860). Jan 30 15:44:45.875491 systemd-logind[1440]: Removed session 2. Jan 30 15:44:46.293125 login[1517]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 15:44:46.301780 login[1519]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 15:44:46.306048 systemd-logind[1440]: New session 3 of user core. Jan 30 15:44:46.316714 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 15:44:46.324337 systemd-logind[1440]: New session 4 of user core. Jan 30 15:44:46.329767 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 15:44:47.187996 sshd[1570]: Accepted publickey for core from 172.24.4.1 port 40860 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:44:47.190544 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:44:47.199127 systemd-logind[1440]: New session 5 of user core. Jan 30 15:44:47.211806 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 30 15:44:47.643619 coreos-metadata[1430]: Jan 30 15:44:47.643 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:44:47.710605 coreos-metadata[1430]: Jan 30 15:44:47.710 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 30 15:44:47.908874 sshd[1570]: pam_unix(sshd:session): session closed for user core Jan 30 15:44:47.917500 systemd[1]: sshd@2-172.24.4.74:22-172.24.4.1:40860.service: Deactivated successfully. Jan 30 15:44:47.921258 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 15:44:47.923530 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. Jan 30 15:44:47.925927 systemd-logind[1440]: Removed session 5. Jan 30 15:44:48.072696 coreos-metadata[1492]: Jan 30 15:44:48.072 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:44:48.113640 coreos-metadata[1492]: Jan 30 15:44:48.113 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 30 15:44:48.204389 coreos-metadata[1430]: Jan 30 15:44:48.204 INFO Fetch successful Jan 30 15:44:48.204757 coreos-metadata[1430]: Jan 30 15:44:48.204 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 15:44:48.213712 coreos-metadata[1430]: Jan 30 15:44:48.213 INFO Fetch successful Jan 30 15:44:48.213712 coreos-metadata[1430]: Jan 30 15:44:48.213 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 30 15:44:48.224332 coreos-metadata[1430]: Jan 30 15:44:48.224 INFO Fetch successful Jan 30 15:44:48.224332 coreos-metadata[1430]: Jan 30 15:44:48.224 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 30 15:44:48.235738 coreos-metadata[1430]: Jan 30 15:44:48.235 INFO Fetch successful Jan 30 15:44:48.235738 coreos-metadata[1430]: Jan 30 15:44:48.235 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 30 15:44:48.244129 coreos-metadata[1430]: Jan 30 15:44:48.244 INFO Fetch successful Jan 30 15:44:48.244129 coreos-metadata[1430]: Jan 30 15:44:48.244 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 30 15:44:48.257247 coreos-metadata[1430]: Jan 30 15:44:48.257 INFO Fetch successful Jan 30 15:44:48.285083 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 15:44:48.288633 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 15:44:48.365323 coreos-metadata[1492]: Jan 30 15:44:48.365 INFO Fetch successful Jan 30 15:44:48.365323 coreos-metadata[1492]: Jan 30 15:44:48.365 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 15:44:48.380415 coreos-metadata[1492]: Jan 30 15:44:48.380 INFO Fetch successful Jan 30 15:44:48.385515 unknown[1492]: wrote ssh authorized keys file for user: core Jan 30 15:44:48.424396 update-ssh-keys[1611]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:44:48.426345 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 15:44:48.428293 systemd[1]: Finished sshkeys.service. Jan 30 15:44:48.431020 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 15:44:48.432463 systemd[1]: Startup finished in 1.295s (kernel) + 14.328s (initrd) + 11.339s (userspace) = 26.963s. Jan 30 15:44:55.866003 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
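The coreos-metadata units above fail to locate a config drive and fall back to the OpenStack metadata service at 169.254.169.254, fetching hostname, instance ID, addresses and SSH keys. A minimal sketch that queries the same endpoints, using only URLs that actually appear in the journal; running it anywhere other than inside the instance is expected to fail:

    #!/usr/bin/env python3
    """Sketch: query the metadata endpoints coreos-metadata logs above.

    Only URLs that appear in the journal are used; anything else about
    the metadata service is an assumption of this illustration.
    """
    from urllib.request import urlopen

    BASE = "http://169.254.169.254"
    PATHS = [
        "/openstack/2012-08-10/meta_data.json",
        "/latest/meta-data/hostname",
        "/latest/meta-data/instance-id",
        "/latest/meta-data/instance-type",
        "/latest/meta-data/local-ipv4",
        "/latest/meta-data/public-ipv4",
        "/latest/meta-data/public-keys/0/openssh-key",
    ]

    def fetch(path, timeout=5):
        """Return the body of one metadata endpoint as text."""
        with urlopen(BASE + path, timeout=timeout) as resp:
            return resp.read().decode().strip()

    if __name__ == "__main__":
        for path in PATHS:
            try:
                print(f"{path}: {fetch(path)[:60]}")
            except OSError as err:  # only reachable from inside the instance
                print(f"{path}: fetch failed ({err})")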
Jan 30 15:44:55.876765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:44:56.088605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:44:56.088616 (kubelet)[1623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:44:56.372564 kubelet[1623]: E0130 15:44:56.372335 1623 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:44:56.381133 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:44:56.381541 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:44:58.030807 systemd[1]: Started sshd@3-172.24.4.74:22-172.24.4.1:47926.service - OpenSSH per-connection server daemon (172.24.4.1:47926). Jan 30 15:44:59.330042 sshd[1633]: Accepted publickey for core from 172.24.4.1 port 47926 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:44:59.333470 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:44:59.344333 systemd-logind[1440]: New session 6 of user core. Jan 30 15:44:59.347569 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 15:45:00.028212 sshd[1633]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:00.042620 systemd[1]: sshd@3-172.24.4.74:22-172.24.4.1:47926.service: Deactivated successfully. Jan 30 15:45:00.046698 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 15:45:00.049019 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. Jan 30 15:45:00.056899 systemd[1]: Started sshd@4-172.24.4.74:22-172.24.4.1:47928.service - OpenSSH per-connection server daemon (172.24.4.1:47928). Jan 30 15:45:00.060399 systemd-logind[1440]: Removed session 6. Jan 30 15:45:01.594564 sshd[1640]: Accepted publickey for core from 172.24.4.1 port 47928 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:01.597627 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:01.608675 systemd-logind[1440]: New session 7 of user core. Jan 30 15:45:01.616624 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 15:45:02.097845 sshd[1640]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:02.110501 systemd[1]: sshd@4-172.24.4.74:22-172.24.4.1:47928.service: Deactivated successfully. Jan 30 15:45:02.113624 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 15:45:02.118576 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Jan 30 15:45:02.124965 systemd[1]: Started sshd@5-172.24.4.74:22-172.24.4.1:47936.service - OpenSSH per-connection server daemon (172.24.4.1:47936). Jan 30 15:45:02.128043 systemd-logind[1440]: Removed session 7. Jan 30 15:45:03.490615 sshd[1647]: Accepted publickey for core from 172.24.4.1 port 47936 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:03.493537 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:03.503978 systemd-logind[1440]: New session 8 of user core. Jan 30 15:45:03.514682 systemd[1]: Started session-8.scope - Session 8 of User core. 
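The kubelet exits with status 1 on every start because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file typically only appears once the node is initialised or joined, so systemd keeps scheduling restarts (the counter is at 1 above and keeps climbing on later attempts). A small sketch, assuming the journal has been saved to a plain text file (e.g. journalctl -u kubelet > journal.txt; the file name is an assumption), that tallies those failures:

    #!/usr/bin/env python3
    """Sketch: count the kubelet config-file failures seen in a saved journal.

    Reads plain journal text and tallies the "failed to load Kubelet config
    file" errors and restart-counter lines that recur above until
    /var/lib/kubelet/config.yaml exists.
    """
    import re
    import sys

    CONFIG_ERR = re.compile(r"failed to load Kubelet config file .*config\.yaml")
    RESTART = re.compile(r"Scheduled restart job, restart counter is at (\d+)")

    def summarize(journal_text):
        """Return (failure_count, highest_restart_counter) for the text."""
        failures = len(CONFIG_ERR.findall(journal_text))
        counters = [int(n) for n in RESTART.findall(journal_text)]
        return failures, max(counters, default=0)

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "journal.txt"
        with open(path, encoding="utf-8", errors="replace") as fh:
            failures, restarts = summarize(fh.read())
        print(f"config-file failures: {failures}, last restart counter: {restarts}")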
Jan 30 15:45:04.264056 sshd[1647]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:04.273821 systemd[1]: sshd@5-172.24.4.74:22-172.24.4.1:47936.service: Deactivated successfully. Jan 30 15:45:04.276963 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 15:45:04.278764 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. Jan 30 15:45:04.289934 systemd[1]: Started sshd@6-172.24.4.74:22-172.24.4.1:49886.service - OpenSSH per-connection server daemon (172.24.4.1:49886). Jan 30 15:45:04.294337 systemd-logind[1440]: Removed session 8. Jan 30 15:45:05.800036 sshd[1654]: Accepted publickey for core from 172.24.4.1 port 49886 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:05.802886 sshd[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:05.813872 systemd-logind[1440]: New session 9 of user core. Jan 30 15:45:05.820564 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 15:45:06.289135 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 15:45:06.290769 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:45:06.312620 sudo[1657]: pam_unix(sudo:session): session closed for user root Jan 30 15:45:06.572726 sshd[1654]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:06.588885 systemd[1]: sshd@6-172.24.4.74:22-172.24.4.1:49886.service: Deactivated successfully. Jan 30 15:45:06.592399 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 15:45:06.594640 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 15:45:06.597602 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. Jan 30 15:45:06.603753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:45:06.616148 systemd[1]: Started sshd@7-172.24.4.74:22-172.24.4.1:49890.service - OpenSSH per-connection server daemon (172.24.4.1:49890). Jan 30 15:45:06.628525 systemd-logind[1440]: Removed session 9. Jan 30 15:45:06.948916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:45:06.965796 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:45:07.112642 kubelet[1672]: E0130 15:45:07.112531 1672 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:45:07.116775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:45:07.117074 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:45:08.098241 sshd[1663]: Accepted publickey for core from 172.24.4.1 port 49890 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:08.101682 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:08.113774 systemd-logind[1440]: New session 10 of user core. Jan 30 15:45:08.125573 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 30 15:45:08.595719 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 15:45:08.596508 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:45:08.604419 sudo[1682]: pam_unix(sudo:session): session closed for user root Jan 30 15:45:08.615654 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 15:45:08.616952 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:45:08.656872 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 15:45:08.658969 auditctl[1685]: No rules Jan 30 15:45:08.661678 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 15:45:08.662139 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 15:45:08.671996 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 15:45:08.739332 augenrules[1703]: No rules Jan 30 15:45:08.740815 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 15:45:08.743583 sudo[1681]: pam_unix(sudo:session): session closed for user root Jan 30 15:45:09.007400 sshd[1663]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:09.018934 systemd[1]: sshd@7-172.24.4.74:22-172.24.4.1:49890.service: Deactivated successfully. Jan 30 15:45:09.022081 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 15:45:09.024021 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. Jan 30 15:45:09.031840 systemd[1]: Started sshd@8-172.24.4.74:22-172.24.4.1:49892.service - OpenSSH per-connection server daemon (172.24.4.1:49892). Jan 30 15:45:09.035503 systemd-logind[1440]: Removed session 10. Jan 30 15:45:10.552417 sshd[1711]: Accepted publickey for core from 172.24.4.1 port 49892 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:10.555612 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:10.565378 systemd-logind[1440]: New session 11 of user core. Jan 30 15:45:10.574559 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 15:45:11.020097 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 15:45:11.020891 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:45:12.808311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:45:12.821451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:45:12.867446 systemd[1]: Reloading requested from client PID 1749 ('systemctl') (unit session-11.scope)... Jan 30 15:45:12.867470 systemd[1]: Reloading... Jan 30 15:45:12.984200 zram_generator::config[1789]: No configuration found. Jan 30 15:45:13.129485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:45:13.214001 systemd[1]: Reloading finished in 345 ms. Jan 30 15:45:13.262735 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 15:45:13.262860 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 15:45:13.263136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 15:45:13.267548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:45:13.378424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:45:13.388542 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 15:45:13.466293 kubelet[1855]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:45:13.466293 kubelet[1855]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 15:45:13.466293 kubelet[1855]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:45:13.466293 kubelet[1855]: I0130 15:45:13.465506 1855 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 15:45:13.790196 kubelet[1855]: I0130 15:45:13.789836 1855 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 15:45:13.790365 kubelet[1855]: I0130 15:45:13.790351 1855 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 15:45:13.790889 kubelet[1855]: I0130 15:45:13.790875 1855 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 15:45:13.807396 kubelet[1855]: I0130 15:45:13.807359 1855 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:45:13.832175 kubelet[1855]: I0130 15:45:13.832068 1855 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 15:45:13.832741 kubelet[1855]: I0130 15:45:13.832668 1855 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 15:45:13.833157 kubelet[1855]: I0130 15:45:13.832739 1855 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.74","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 15:45:13.834814 kubelet[1855]: I0130 15:45:13.834762 1855 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 15:45:13.834814 kubelet[1855]: I0130 15:45:13.834815 1855 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 15:45:13.835091 kubelet[1855]: I0130 15:45:13.835048 1855 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:45:13.838098 kubelet[1855]: I0130 15:45:13.837478 1855 kubelet.go:400] "Attempting to sync node with API server" Jan 30 15:45:13.838196 kubelet[1855]: I0130 15:45:13.838098 1855 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 15:45:13.838225 kubelet[1855]: I0130 15:45:13.838194 1855 kubelet.go:312] "Adding apiserver pod source" Jan 30 15:45:13.838283 kubelet[1855]: I0130 15:45:13.838232 1855 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 15:45:13.839029 kubelet[1855]: E0130 15:45:13.838684 1855 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:13.839029 kubelet[1855]: E0130 15:45:13.838740 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:13.845448 kubelet[1855]: I0130 15:45:13.845399 1855 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 15:45:13.849061 kubelet[1855]: I0130 15:45:13.849018 1855 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 15:45:13.849150 kubelet[1855]: W0130 15:45:13.849115 1855 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 15:45:13.850913 kubelet[1855]: I0130 15:45:13.850600 1855 server.go:1264] "Started kubelet" Jan 30 15:45:13.851179 kubelet[1855]: I0130 15:45:13.851031 1855 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 15:45:13.853382 kubelet[1855]: I0130 15:45:13.852104 1855 server.go:455] "Adding debug handlers to kubelet server" Jan 30 15:45:13.854940 kubelet[1855]: I0130 15:45:13.854777 1855 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 15:45:13.863785 kubelet[1855]: I0130 15:45:13.863648 1855 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 15:45:13.864206 kubelet[1855]: I0130 15:45:13.864124 1855 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 15:45:13.867481 kubelet[1855]: W0130 15:45:13.866862 1855 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 15:45:13.867481 kubelet[1855]: E0130 15:45:13.866948 1855 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 15:45:13.867481 kubelet[1855]: W0130 15:45:13.867151 1855 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.74" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 15:45:13.867481 kubelet[1855]: E0130 15:45:13.867191 1855 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.74" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 15:45:13.868509 kubelet[1855]: I0130 15:45:13.868476 1855 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 15:45:13.869391 kubelet[1855]: I0130 15:45:13.869330 1855 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 15:45:13.869508 kubelet[1855]: I0130 15:45:13.869483 1855 reconciler.go:26] "Reconciler: start to sync state" Jan 30 15:45:13.880608 kubelet[1855]: I0130 15:45:13.879615 1855 factory.go:221] Registration of the systemd container factory successfully Jan 30 15:45:13.880608 kubelet[1855]: I0130 15:45:13.879881 1855 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 15:45:13.884812 kubelet[1855]: E0130 15:45:13.883901 1855 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 15:45:13.887320 kubelet[1855]: I0130 15:45:13.887211 1855 factory.go:221] Registration of the containerd container factory successfully Jan 30 15:45:13.914087 kubelet[1855]: E0130 15:45:13.914034 1855 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.74\" not found" node="172.24.4.74" Jan 30 15:45:13.918673 kubelet[1855]: I0130 15:45:13.918649 1855 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 15:45:13.918972 kubelet[1855]: I0130 15:45:13.918823 1855 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 15:45:13.918972 kubelet[1855]: I0130 15:45:13.918844 1855 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:45:13.928326 kubelet[1855]: I0130 15:45:13.927827 1855 policy_none.go:49] "None policy: Start" Jan 30 15:45:13.929040 kubelet[1855]: I0130 15:45:13.929004 1855 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 15:45:13.929040 kubelet[1855]: I0130 15:45:13.929029 1855 state_mem.go:35] "Initializing new in-memory state store" Jan 30 15:45:13.938840 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 15:45:13.951302 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 15:45:13.955400 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 15:45:13.964469 kubelet[1855]: I0130 15:45:13.963608 1855 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 15:45:13.964469 kubelet[1855]: I0130 15:45:13.963811 1855 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 15:45:13.964469 kubelet[1855]: I0130 15:45:13.963931 1855 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 15:45:13.965001 kubelet[1855]: I0130 15:45:13.964969 1855 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 15:45:13.968716 kubelet[1855]: I0130 15:45:13.968678 1855 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 15:45:13.968716 kubelet[1855]: I0130 15:45:13.968711 1855 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 15:45:13.968822 kubelet[1855]: I0130 15:45:13.968729 1855 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 15:45:13.968822 kubelet[1855]: E0130 15:45:13.968773 1855 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 30 15:45:13.971027 kubelet[1855]: E0130 15:45:13.970968 1855 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.74\" not found" Jan 30 15:45:13.971178 kubelet[1855]: I0130 15:45:13.971164 1855 kubelet_node_status.go:73] "Attempting to register node" node="172.24.4.74" Jan 30 15:45:13.980803 kubelet[1855]: I0130 15:45:13.980782 1855 kubelet_node_status.go:76] "Successfully registered node" node="172.24.4.74" Jan 30 15:45:14.024413 kubelet[1855]: E0130 15:45:14.024339 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.74\" not found" Jan 30 15:45:14.125572 kubelet[1855]: E0130 15:45:14.125518 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.74\" not found" Jan 30 15:45:14.226768 kubelet[1855]: E0130 15:45:14.226705 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.74\" not found" Jan 30 15:45:14.327956 kubelet[1855]: E0130 15:45:14.327879 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.74\" not found" Jan 30 15:45:14.417582 sudo[1714]: pam_unix(sudo:session): session closed for user root Jan 30 15:45:14.428107 kubelet[1855]: E0130 15:45:14.428025 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.74\" not found" Jan 30 15:45:14.528928 kubelet[1855]: E0130 15:45:14.528788 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.74\" not found" Jan 30 15:45:14.590775 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:14.596786 systemd[1]: sshd@8-172.24.4.74:22-172.24.4.1:49892.service: Deactivated successfully. Jan 30 15:45:14.602062 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 15:45:14.602939 systemd[1]: session-11.scope: Consumed 1.115s CPU time, 110.3M memory peak, 0B memory swap peak. Jan 30 15:45:14.606685 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. Jan 30 15:45:14.609366 systemd-logind[1440]: Removed session 11. 
Jan 30 15:45:14.629871 kubelet[1855]: E0130 15:45:14.629783 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.74\" not found" Jan 30 15:45:14.730888 kubelet[1855]: E0130 15:45:14.730723 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.74\" not found" Jan 30 15:45:14.792646 kubelet[1855]: I0130 15:45:14.792565 1855 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 15:45:14.792968 kubelet[1855]: W0130 15:45:14.792856 1855 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 15:45:14.792968 kubelet[1855]: W0130 15:45:14.792914 1855 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 15:45:14.792968 kubelet[1855]: W0130 15:45:14.792934 1855 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 15:45:14.831414 kubelet[1855]: E0130 15:45:14.831330 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.74\" not found" Jan 30 15:45:14.839717 kubelet[1855]: E0130 15:45:14.839666 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:14.932053 kubelet[1855]: E0130 15:45:14.931974 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.74\" not found" Jan 30 15:45:15.032886 kubelet[1855]: E0130 15:45:15.032625 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.74\" not found" Jan 30 15:45:15.133834 kubelet[1855]: E0130 15:45:15.133655 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.74\" not found" Jan 30 15:45:15.235894 kubelet[1855]: I0130 15:45:15.235813 1855 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 15:45:15.236714 containerd[1460]: time="2025-01-30T15:45:15.236541137Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
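containerd keeps waiting on an empty /etc/cni/net.d: the kubelet has just handed the runtime pod CIDR 192.168.1.0/24, but nothing has installed a CNI configuration yet (on this node the cilium pod admitted below would normally be what drops one). For a cluster without such an agent, a minimal bridge conflist of the following shape would satisfy the check; the file name, network name and use of the standard bridge/host-local/portmap plugins are assumptions, and those plugin binaries would have to exist under /opt/cni/bin:

    #!/usr/bin/env python3
    """Sketch: write a minimal CNI conflist of the kind containerd is waiting for.

    The directory and pod CIDR come from the journal (/etc/cni/net.d,
    192.168.1.0/24); the file name, network name and plugin choice are
    assumptions of this example.
    """
    import json
    import os

    CNI_DIR = "/etc/cni/net.d"    # NetworkPluginConfDir from the CRI config above
    POD_CIDR = "192.168.1.0/24"   # CIDR the kubelet reported to the runtime

    CONFLIST = {
        "cniVersion": "0.3.1",
        "name": "example-bridge",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": POD_CIDR,
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    if __name__ == "__main__":
        os.makedirs(CNI_DIR, exist_ok=True)
        target = os.path.join(CNI_DIR, "10-example-bridge.conflist")
        with open(target, "w") as fh:
            json.dump(CONFLIST, fh, indent=2)
        print(f"wrote {target}")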
Jan 30 15:45:15.238396 kubelet[1855]: I0130 15:45:15.237024 1855 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 15:45:15.840827 kubelet[1855]: E0130 15:45:15.840692 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:15.840827 kubelet[1855]: I0130 15:45:15.840825 1855 apiserver.go:52] "Watching apiserver" Jan 30 15:45:15.851950 kubelet[1855]: I0130 15:45:15.851550 1855 topology_manager.go:215] "Topology Admit Handler" podUID="2006ec0f-f993-4556-96ae-a863921f36b0" podNamespace="kube-system" podName="cilium-m2xgh" Jan 30 15:45:15.851950 kubelet[1855]: I0130 15:45:15.851896 1855 topology_manager.go:215] "Topology Admit Handler" podUID="1948b203-6ab9-4255-9d35-bd632bcfe76a" podNamespace="kube-system" podName="kube-proxy-r62qc" Jan 30 15:45:15.869374 systemd[1]: Created slice kubepods-burstable-pod2006ec0f_f993_4556_96ae_a863921f36b0.slice - libcontainer container kubepods-burstable-pod2006ec0f_f993_4556_96ae_a863921f36b0.slice. Jan 30 15:45:15.871710 kubelet[1855]: I0130 15:45:15.871645 1855 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 15:45:15.881430 kubelet[1855]: I0130 15:45:15.881352 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1948b203-6ab9-4255-9d35-bd632bcfe76a-kube-proxy\") pod \"kube-proxy-r62qc\" (UID: \"1948b203-6ab9-4255-9d35-bd632bcfe76a\") " pod="kube-system/kube-proxy-r62qc" Jan 30 15:45:15.881430 kubelet[1855]: I0130 15:45:15.881442 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1948b203-6ab9-4255-9d35-bd632bcfe76a-lib-modules\") pod \"kube-proxy-r62qc\" (UID: \"1948b203-6ab9-4255-9d35-bd632bcfe76a\") " pod="kube-system/kube-proxy-r62qc" Jan 30 15:45:15.881430 kubelet[1855]: I0130 15:45:15.881495 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-cni-path\") pod \"cilium-m2xgh\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " pod="kube-system/cilium-m2xgh" Jan 30 15:45:15.881430 kubelet[1855]: I0130 15:45:15.881558 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-xtables-lock\") pod \"cilium-m2xgh\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " pod="kube-system/cilium-m2xgh" Jan 30 15:45:15.881430 kubelet[1855]: I0130 15:45:15.881609 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2006ec0f-f993-4556-96ae-a863921f36b0-clustermesh-secrets\") pod \"cilium-m2xgh\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " pod="kube-system/cilium-m2xgh" Jan 30 15:45:15.882020 kubelet[1855]: I0130 15:45:15.881684 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-host-proc-sys-net\") pod \"cilium-m2xgh\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " pod="kube-system/cilium-m2xgh" Jan 30 15:45:15.882020 kubelet[1855]: I0130 15:45:15.881732 1855 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-host-proc-sys-kernel\") pod \"cilium-m2xgh\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " pod="kube-system/cilium-m2xgh" Jan 30 15:45:15.882020 kubelet[1855]: I0130 15:45:15.881776 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2006ec0f-f993-4556-96ae-a863921f36b0-hubble-tls\") pod \"cilium-m2xgh\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " pod="kube-system/cilium-m2xgh" Jan 30 15:45:15.882020 kubelet[1855]: I0130 15:45:15.881824 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wbjz\" (UniqueName: \"kubernetes.io/projected/2006ec0f-f993-4556-96ae-a863921f36b0-kube-api-access-9wbjz\") pod \"cilium-m2xgh\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " pod="kube-system/cilium-m2xgh" Jan 30 15:45:15.882020 kubelet[1855]: I0130 15:45:15.881869 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-hostproc\") pod \"cilium-m2xgh\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " pod="kube-system/cilium-m2xgh" Jan 30 15:45:15.882020 kubelet[1855]: I0130 15:45:15.881910 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-lib-modules\") pod \"cilium-m2xgh\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " pod="kube-system/cilium-m2xgh" Jan 30 15:45:15.882465 kubelet[1855]: I0130 15:45:15.881951 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-cilium-cgroup\") pod \"cilium-m2xgh\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " pod="kube-system/cilium-m2xgh" Jan 30 15:45:15.882465 kubelet[1855]: I0130 15:45:15.882007 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2006ec0f-f993-4556-96ae-a863921f36b0-cilium-config-path\") pod \"cilium-m2xgh\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " pod="kube-system/cilium-m2xgh" Jan 30 15:45:15.882465 kubelet[1855]: I0130 15:45:15.882057 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-etc-cni-netd\") pod \"cilium-m2xgh\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " pod="kube-system/cilium-m2xgh" Jan 30 15:45:15.882465 kubelet[1855]: I0130 15:45:15.882114 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1948b203-6ab9-4255-9d35-bd632bcfe76a-xtables-lock\") pod \"kube-proxy-r62qc\" (UID: \"1948b203-6ab9-4255-9d35-bd632bcfe76a\") " pod="kube-system/kube-proxy-r62qc" Jan 30 15:45:15.882465 kubelet[1855]: I0130 15:45:15.882160 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j7zw\" (UniqueName: 
\"kubernetes.io/projected/1948b203-6ab9-4255-9d35-bd632bcfe76a-kube-api-access-7j7zw\") pod \"kube-proxy-r62qc\" (UID: \"1948b203-6ab9-4255-9d35-bd632bcfe76a\") " pod="kube-system/kube-proxy-r62qc" Jan 30 15:45:15.882799 kubelet[1855]: I0130 15:45:15.882203 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-cilium-run\") pod \"cilium-m2xgh\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " pod="kube-system/cilium-m2xgh" Jan 30 15:45:15.882799 kubelet[1855]: I0130 15:45:15.882247 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-bpf-maps\") pod \"cilium-m2xgh\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " pod="kube-system/cilium-m2xgh" Jan 30 15:45:15.895721 systemd[1]: Created slice kubepods-besteffort-pod1948b203_6ab9_4255_9d35_bd632bcfe76a.slice - libcontainer container kubepods-besteffort-pod1948b203_6ab9_4255_9d35_bd632bcfe76a.slice. Jan 30 15:45:16.190637 containerd[1460]: time="2025-01-30T15:45:16.190560926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m2xgh,Uid:2006ec0f-f993-4556-96ae-a863921f36b0,Namespace:kube-system,Attempt:0,}" Jan 30 15:45:16.216645 containerd[1460]: time="2025-01-30T15:45:16.216540619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r62qc,Uid:1948b203-6ab9-4255-9d35-bd632bcfe76a,Namespace:kube-system,Attempt:0,}" Jan 30 15:45:16.841394 kubelet[1855]: E0130 15:45:16.841302 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:16.997988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount933347683.mount: Deactivated successfully. 
Jan 30 15:45:17.002951 containerd[1460]: time="2025-01-30T15:45:17.002764338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:45:17.006324 containerd[1460]: time="2025-01-30T15:45:17.006193587Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:45:17.008386 containerd[1460]: time="2025-01-30T15:45:17.008328655Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 15:45:17.010459 containerd[1460]: time="2025-01-30T15:45:17.010365955Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:45:17.011222 containerd[1460]: time="2025-01-30T15:45:17.011082128Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 15:45:17.015470 containerd[1460]: time="2025-01-30T15:45:17.015363481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:45:17.019687 containerd[1460]: time="2025-01-30T15:45:17.019347520Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 828.320464ms" Jan 30 15:45:17.024922 containerd[1460]: time="2025-01-30T15:45:17.024848382Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 808.145495ms" Jan 30 15:45:17.257975 containerd[1460]: time="2025-01-30T15:45:17.257297448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:45:17.257975 containerd[1460]: time="2025-01-30T15:45:17.257368150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:45:17.257975 containerd[1460]: time="2025-01-30T15:45:17.257384640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:17.257975 containerd[1460]: time="2025-01-30T15:45:17.257472645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:17.274215 containerd[1460]: time="2025-01-30T15:45:17.274019558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:45:17.274215 containerd[1460]: time="2025-01-30T15:45:17.274151409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:45:17.277803 containerd[1460]: time="2025-01-30T15:45:17.275548311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:17.277803 containerd[1460]: time="2025-01-30T15:45:17.275764346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:17.361434 systemd[1]: Started cri-containerd-2928ee058de5c70d2703fac5f932a736ee1eb1f1be4e1238d35541d8b38c8183.scope - libcontainer container 2928ee058de5c70d2703fac5f932a736ee1eb1f1be4e1238d35541d8b38c8183. Jan 30 15:45:17.364101 systemd[1]: Started cri-containerd-786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e.scope - libcontainer container 786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e. Jan 30 15:45:17.400072 containerd[1460]: time="2025-01-30T15:45:17.400031289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r62qc,Uid:1948b203-6ab9-4255-9d35-bd632bcfe76a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2928ee058de5c70d2703fac5f932a736ee1eb1f1be4e1238d35541d8b38c8183\"" Jan 30 15:45:17.403771 containerd[1460]: time="2025-01-30T15:45:17.403458903Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 15:45:17.405606 containerd[1460]: time="2025-01-30T15:45:17.405496834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m2xgh,Uid:2006ec0f-f993-4556-96ae-a863921f36b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\"" Jan 30 15:45:17.842000 kubelet[1855]: E0130 15:45:17.841926 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:18.763286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180941204.mount: Deactivated successfully. 
Jan 30 15:45:18.842848 kubelet[1855]: E0130 15:45:18.842768 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:19.306268 containerd[1460]: time="2025-01-30T15:45:19.306181554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:19.307300 containerd[1460]: time="2025-01-30T15:45:19.307162375Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 30 15:45:19.308699 containerd[1460]: time="2025-01-30T15:45:19.308545752Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:19.311217 containerd[1460]: time="2025-01-30T15:45:19.311142263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:19.312318 containerd[1460]: time="2025-01-30T15:45:19.312046962Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.908548393s" Jan 30 15:45:19.312318 containerd[1460]: time="2025-01-30T15:45:19.312107858Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 15:45:19.314577 containerd[1460]: time="2025-01-30T15:45:19.314538814Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 15:45:19.315915 containerd[1460]: time="2025-01-30T15:45:19.315856645Z" level=info msg="CreateContainer within sandbox \"2928ee058de5c70d2703fac5f932a736ee1eb1f1be4e1238d35541d8b38c8183\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 15:45:19.338152 containerd[1460]: time="2025-01-30T15:45:19.337960892Z" level=info msg="CreateContainer within sandbox \"2928ee058de5c70d2703fac5f932a736ee1eb1f1be4e1238d35541d8b38c8183\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"70aff660066e765a341d39af6ced39129ddc1e437a088dc7b825a96d98fb90f2\"" Jan 30 15:45:19.339078 containerd[1460]: time="2025-01-30T15:45:19.338804506Z" level=info msg="StartContainer for \"70aff660066e765a341d39af6ced39129ddc1e437a088dc7b825a96d98fb90f2\"" Jan 30 15:45:19.367483 systemd[1]: run-containerd-runc-k8s.io-70aff660066e765a341d39af6ced39129ddc1e437a088dc7b825a96d98fb90f2-runc.wYks3m.mount: Deactivated successfully. Jan 30 15:45:19.374448 systemd[1]: Started cri-containerd-70aff660066e765a341d39af6ced39129ddc1e437a088dc7b825a96d98fb90f2.scope - libcontainer container 70aff660066e765a341d39af6ced39129ddc1e437a088dc7b825a96d98fb90f2. 
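The containerd entries above report each completed pull as 'Pulled image "<ref>" ... in <duration>' (828 ms and 808 ms for the two pause:3.8 pulls, about 1.9 s for kube-proxy:v1.30.9). Purely as an illustration of reading those figures back out of a saved copy of this journal, and assuming only the message shape quoted here (the text is not a stable containerd interface), a short Python sketch:

    import re

    # Matches the containerd message shape seen in this journal:
    #   Pulled image \"<ref>\" ..., size \"<n>\" in <duration>
    # where <duration> is printed as e.g. 828.320464ms or 1.908548393s.
    PULL_RE = re.compile(r'Pulled image \\?"([^"\\]+)\\?".*? in ([0-9.]+)(ms|s)\b')

    def pull_seconds(line):
        """Return (image_ref, seconds) for a 'Pulled image ... in <dur>' message, else None."""
        m = PULL_RE.search(line)
        if not m:
            return None
        ref, dur, unit = m.groups()
        return ref, float(dur) / (1000.0 if unit == "ms" else 1.0)

    if __name__ == "__main__":
        # Fragments copied from the entries above, with the surrounding fields elided.
        samples = [
            'msg="Pulled image \\"registry.k8s.io/pause:3.8\\" ... in 828.320464ms"',
            'msg="Pulled image \\"registry.k8s.io/kube-proxy:v1.30.9\\" ... in 1.908548393s"',
        ]
        for s in samples:
            print(pull_seconds(s))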
Jan 30 15:45:19.407476 containerd[1460]: time="2025-01-30T15:45:19.407393755Z" level=info msg="StartContainer for \"70aff660066e765a341d39af6ced39129ddc1e437a088dc7b825a96d98fb90f2\" returns successfully" Jan 30 15:45:19.843687 kubelet[1855]: E0130 15:45:19.843610 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:20.100052 kubelet[1855]: I0130 15:45:20.099720 1855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r62qc" podStartSLOduration=4.188586026 podStartE2EDuration="6.099677869s" podCreationTimestamp="2025-01-30 15:45:14 +0000 UTC" firstStartedPulling="2025-01-30 15:45:17.402709719 +0000 UTC m=+4.009801820" lastFinishedPulling="2025-01-30 15:45:19.313801572 +0000 UTC m=+5.920893663" observedRunningTime="2025-01-30 15:45:20.099310397 +0000 UTC m=+6.706402588" watchObservedRunningTime="2025-01-30 15:45:20.099677869 +0000 UTC m=+6.706770081" Jan 30 15:45:20.843875 kubelet[1855]: E0130 15:45:20.843783 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:21.844428 kubelet[1855]: E0130 15:45:21.844248 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:22.844663 kubelet[1855]: E0130 15:45:22.844554 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:23.845201 kubelet[1855]: E0130 15:45:23.845138 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:24.845328 kubelet[1855]: E0130 15:45:24.845239 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:25.475969 update_engine[1443]: I20250130 15:45:25.475303 1443 update_attempter.cc:509] Updating boot flags... Jan 30 15:45:25.512347 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2165) Jan 30 15:45:25.595579 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2166) Jan 30 15:45:25.663498 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2166) Jan 30 15:45:25.846241 kubelet[1855]: E0130 15:45:25.846129 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:26.032404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977859784.mount: Deactivated successfully. 
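The kubelet's pod_startup_latency_tracker entry above carries enough timestamps to reproduce its own figures: the E2E duration appears to be watchObservedRunningTime minus podCreationTimestamp, and the SLO duration appears to be that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). That reading is inferred from the numbers in this one entry rather than from kubelet internals; a throwaway Python check using the values copied from the message:

    from datetime import datetime

    def parse(s):
        """Parse a timestamp like '2025-01-30 15:45:20.099677869',
        truncating the fraction to the microseconds datetime supports."""
        if "." in s:
            head, frac = s.split(".", 1)
            return datetime.strptime(f"{head}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")
        return datetime.strptime(s, "%Y-%m-%d %H:%M:%S")

    # Values copied from the kube-proxy-r62qc entry above.
    created    = parse("2025-01-30 15:45:14")
    first_pull = parse("2025-01-30 15:45:17.402709719")
    last_pull  = parse("2025-01-30 15:45:19.313801572")
    running    = parse("2025-01-30 15:45:20.099677869")   # watchObservedRunningTime

    e2e  = (running - created).total_seconds()
    pull = (last_pull - first_pull).total_seconds()
    print(f"e2e ~ {e2e:.6f}s, e2e minus pull window ~ {e2e - pull:.6f}s")
    # Prints roughly 6.099677s and 4.188585s, matching podStartE2EDuration="6.099677869s"
    # and podStartSLOduration=4.188586026 up to the nanoseconds truncated here.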
Jan 30 15:45:26.847110 kubelet[1855]: E0130 15:45:26.847009 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:27.847210 kubelet[1855]: E0130 15:45:27.847157 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:28.848802 kubelet[1855]: E0130 15:45:28.848726 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:29.233311 containerd[1460]: time="2025-01-30T15:45:29.233079920Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:29.236232 containerd[1460]: time="2025-01-30T15:45:29.235685578Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 15:45:29.237867 containerd[1460]: time="2025-01-30T15:45:29.237792811Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:29.244885 containerd[1460]: time="2025-01-30T15:45:29.244740294Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.930147186s" Jan 30 15:45:29.245110 containerd[1460]: time="2025-01-30T15:45:29.245065070Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 15:45:29.251884 containerd[1460]: time="2025-01-30T15:45:29.251820936Z" level=info msg="CreateContainer within sandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 15:45:29.284704 containerd[1460]: time="2025-01-30T15:45:29.284613528Z" level=info msg="CreateContainer within sandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d\"" Jan 30 15:45:29.286341 containerd[1460]: time="2025-01-30T15:45:29.286010296Z" level=info msg="StartContainer for \"20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d\"" Jan 30 15:45:29.341516 systemd[1]: run-containerd-runc-k8s.io-20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d-runc.gaV4CY.mount: Deactivated successfully. Jan 30 15:45:29.353427 systemd[1]: Started cri-containerd-20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d.scope - libcontainer container 20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d. 
Jan 30 15:45:29.392574 containerd[1460]: time="2025-01-30T15:45:29.392442041Z" level=info msg="StartContainer for \"20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d\" returns successfully" Jan 30 15:45:29.398177 systemd[1]: cri-containerd-20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d.scope: Deactivated successfully. Jan 30 15:45:29.850771 kubelet[1855]: E0130 15:45:29.850616 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:30.269520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d-rootfs.mount: Deactivated successfully. Jan 30 15:45:30.748834 containerd[1460]: time="2025-01-30T15:45:30.748593291Z" level=info msg="shim disconnected" id=20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d namespace=k8s.io Jan 30 15:45:30.749748 containerd[1460]: time="2025-01-30T15:45:30.748819529Z" level=warning msg="cleaning up after shim disconnected" id=20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d namespace=k8s.io Jan 30 15:45:30.749748 containerd[1460]: time="2025-01-30T15:45:30.748886042Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:45:30.851444 kubelet[1855]: E0130 15:45:30.851366 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:31.053429 containerd[1460]: time="2025-01-30T15:45:31.052537885Z" level=info msg="CreateContainer within sandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 15:45:31.618305 containerd[1460]: time="2025-01-30T15:45:31.618112586Z" level=info msg="CreateContainer within sandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c\"" Jan 30 15:45:31.619830 containerd[1460]: time="2025-01-30T15:45:31.618956197Z" level=info msg="StartContainer for \"5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c\"" Jan 30 15:45:31.679659 systemd[1]: Started cri-containerd-5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c.scope - libcontainer container 5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c. Jan 30 15:45:31.757758 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 15:45:31.758335 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:45:31.758445 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:45:31.765997 containerd[1460]: time="2025-01-30T15:45:31.765741418Z" level=info msg="StartContainer for \"5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c\" returns successfully" Jan 30 15:45:31.768885 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:45:31.769232 systemd[1]: cri-containerd-5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c.scope: Deactivated successfully. Jan 30 15:45:31.798338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c-rootfs.mount: Deactivated successfully. Jan 30 15:45:31.808538 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 15:45:31.852234 kubelet[1855]: E0130 15:45:31.852085 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:31.871955 containerd[1460]: time="2025-01-30T15:45:31.870591455Z" level=info msg="shim disconnected" id=5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c namespace=k8s.io Jan 30 15:45:31.871955 containerd[1460]: time="2025-01-30T15:45:31.870707625Z" level=warning msg="cleaning up after shim disconnected" id=5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c namespace=k8s.io Jan 30 15:45:31.871955 containerd[1460]: time="2025-01-30T15:45:31.870731476Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:45:31.899067 containerd[1460]: time="2025-01-30T15:45:31.898899521Z" level=warning msg="cleanup warnings time=\"2025-01-30T15:45:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 15:45:32.060078 containerd[1460]: time="2025-01-30T15:45:32.059981851Z" level=info msg="CreateContainer within sandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 15:45:32.204884 containerd[1460]: time="2025-01-30T15:45:32.204456527Z" level=info msg="CreateContainer within sandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b\"" Jan 30 15:45:32.206674 containerd[1460]: time="2025-01-30T15:45:32.206182966Z" level=info msg="StartContainer for \"7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b\"" Jan 30 15:45:32.266653 systemd[1]: Started cri-containerd-7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b.scope - libcontainer container 7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b. Jan 30 15:45:32.326987 systemd[1]: cri-containerd-7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b.scope: Deactivated successfully. Jan 30 15:45:32.412229 containerd[1460]: time="2025-01-30T15:45:32.412072295Z" level=info msg="StartContainer for \"7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b\" returns successfully" Jan 30 15:45:32.472143 containerd[1460]: time="2025-01-30T15:45:32.471583141Z" level=info msg="shim disconnected" id=7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b namespace=k8s.io Jan 30 15:45:32.472143 containerd[1460]: time="2025-01-30T15:45:32.471697115Z" level=warning msg="cleaning up after shim disconnected" id=7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b namespace=k8s.io Jan 30 15:45:32.472143 containerd[1460]: time="2025-01-30T15:45:32.471724083Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:45:32.853012 kubelet[1855]: E0130 15:45:32.852794 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:33.066686 containerd[1460]: time="2025-01-30T15:45:33.066582216Z" level=info msg="CreateContainer within sandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 15:45:33.099189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052497904.mount: Deactivated successfully. 
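Mount units such as var-lib-containerd-tmpmounts-containerd\x2dmount3052497904.mount, which keep appearing above as containerd's temporary image mounts are cleaned up, are just systemd's path escaping: '/' becomes '-' and a literal '-' becomes \x2d. A best-effort decoder, offered only as an illustration of that rule (the systemd-escape tool does the conversion authoritatively); Python 3.9+ for removesuffix:

    def unit_to_path(unit: str) -> str:
        """Reverse systemd's path escaping for a .mount unit name:
        drop the .mount suffix, turn '-' back into '/', decode \\xNN escapes."""
        name = unit.removesuffix(".mount")
        out, i = [], 0
        while i < len(name):
            if name.startswith("\\x", i) and i + 4 <= len(name):
                out.append(chr(int(name[i + 2:i + 4], 16)))   # e.g. \x2d -> '-'
                i += 4
            elif name[i] == "-":
                out.append("/")
                i += 1
            else:
                out.append(name[i])
                i += 1
        return "/" + "".join(out)

    # unit_to_path("var-lib-containerd-tmpmounts-containerd\\x2dmount3052497904.mount")
    # -> "/var/lib/containerd/tmpmounts/containerd-mount3052497904"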
Jan 30 15:45:33.104715 containerd[1460]: time="2025-01-30T15:45:33.103965196Z" level=info msg="CreateContainer within sandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6\"" Jan 30 15:45:33.116134 containerd[1460]: time="2025-01-30T15:45:33.115768676Z" level=info msg="StartContainer for \"84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6\"" Jan 30 15:45:33.174448 systemd[1]: Started cri-containerd-84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6.scope - libcontainer container 84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6. Jan 30 15:45:33.195497 systemd[1]: cri-containerd-84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6.scope: Deactivated successfully. Jan 30 15:45:33.200970 containerd[1460]: time="2025-01-30T15:45:33.200932350Z" level=info msg="StartContainer for \"84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6\" returns successfully" Jan 30 15:45:33.223169 containerd[1460]: time="2025-01-30T15:45:33.223078514Z" level=info msg="shim disconnected" id=84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6 namespace=k8s.io Jan 30 15:45:33.223169 containerd[1460]: time="2025-01-30T15:45:33.223153083Z" level=warning msg="cleaning up after shim disconnected" id=84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6 namespace=k8s.io Jan 30 15:45:33.223169 containerd[1460]: time="2025-01-30T15:45:33.223165770Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:45:33.492402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6-rootfs.mount: Deactivated successfully. Jan 30 15:45:33.839440 kubelet[1855]: E0130 15:45:33.839208 1855 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:33.854082 kubelet[1855]: E0130 15:45:33.854012 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:34.076198 containerd[1460]: time="2025-01-30T15:45:34.076085157Z" level=info msg="CreateContainer within sandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 15:45:34.134570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824832168.mount: Deactivated successfully. Jan 30 15:45:34.145986 containerd[1460]: time="2025-01-30T15:45:34.145883568Z" level=info msg="CreateContainer within sandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db\"" Jan 30 15:45:34.147462 containerd[1460]: time="2025-01-30T15:45:34.147374800Z" level=info msg="StartContainer for \"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db\"" Jan 30 15:45:34.207405 systemd[1]: Started cri-containerd-e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db.scope - libcontainer container e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db. 
Jan 30 15:45:34.237559 containerd[1460]: time="2025-01-30T15:45:34.237491873Z" level=info msg="StartContainer for \"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db\" returns successfully" Jan 30 15:45:34.398154 kubelet[1855]: I0130 15:45:34.397787 1855 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 15:45:34.783324 kernel: Initializing XFRM netlink socket Jan 30 15:45:34.855185 kubelet[1855]: E0130 15:45:34.855049 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:35.856380 kubelet[1855]: E0130 15:45:35.856305 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:36.551115 systemd-networkd[1371]: cilium_host: Link UP Jan 30 15:45:36.555471 systemd-networkd[1371]: cilium_net: Link UP Jan 30 15:45:36.558014 systemd-networkd[1371]: cilium_net: Gained carrier Jan 30 15:45:36.558548 systemd-networkd[1371]: cilium_host: Gained carrier Jan 30 15:45:36.630392 systemd-networkd[1371]: cilium_net: Gained IPv6LL Jan 30 15:45:36.702602 systemd-networkd[1371]: cilium_vxlan: Link UP Jan 30 15:45:36.702615 systemd-networkd[1371]: cilium_vxlan: Gained carrier Jan 30 15:45:36.857966 kubelet[1855]: E0130 15:45:36.857643 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:36.941516 systemd-networkd[1371]: cilium_host: Gained IPv6LL Jan 30 15:45:37.059345 kernel: NET: Registered PF_ALG protocol family Jan 30 15:45:37.859596 kubelet[1855]: E0130 15:45:37.858886 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:37.981502 systemd-networkd[1371]: cilium_vxlan: Gained IPv6LL Jan 30 15:45:38.104395 systemd-networkd[1371]: lxc_health: Link UP Jan 30 15:45:38.113728 systemd-networkd[1371]: lxc_health: Gained carrier Jan 30 15:45:38.253329 kubelet[1855]: I0130 15:45:38.253193 1855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m2xgh" podStartSLOduration=12.415094416 podStartE2EDuration="24.253171481s" podCreationTimestamp="2025-01-30 15:45:14 +0000 UTC" firstStartedPulling="2025-01-30 15:45:17.408955575 +0000 UTC m=+4.016047676" lastFinishedPulling="2025-01-30 15:45:29.2470326 +0000 UTC m=+15.854124741" observedRunningTime="2025-01-30 15:45:35.110654598 +0000 UTC m=+21.717746789" watchObservedRunningTime="2025-01-30 15:45:38.253171481 +0000 UTC m=+24.860263572" Jan 30 15:45:38.859708 kubelet[1855]: E0130 15:45:38.859606 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:39.861121 kubelet[1855]: E0130 15:45:39.860600 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:40.093503 systemd-networkd[1371]: lxc_health: Gained IPv6LL Jan 30 15:45:40.293518 kubelet[1855]: I0130 15:45:40.293462 1855 topology_manager.go:215] "Topology Admit Handler" podUID="4ca20ea9-f35d-4675-93a5-675cf5453d81" podNamespace="default" podName="nginx-deployment-85f456d6dd-hcw4s" Jan 30 15:45:40.310932 systemd[1]: Created slice kubepods-besteffort-pod4ca20ea9_f35d_4675_93a5_675cf5453d81.slice - libcontainer container kubepods-besteffort-pod4ca20ea9_f35d_4675_93a5_675cf5453d81.slice. 
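Between 15:45:29 and 15:45:34 the cilium-m2xgh sandbox runs a chain of short-lived setup containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) and then the long-lived cilium-agent, each leaving the same CreateContainer, StartContainer, scope-deactivated and "shim disconnected" trail above. As a sketch of reconstructing that sequence from the journal text alone (the regexes mirror the message shapes quoted here and are assumptions, not a containerd interface):

    import re

    CREATED = re.compile(r'for &ContainerMetadata\{Name:([^,]+),.*?returns container id \\?"([0-9a-f]{64})\\?"')
    STARTED = re.compile(r'StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully')
    EXITED  = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})')

    def lifecycle(lines):
        """Track each container's name, start and exit as seen in the journal text."""
        state = {}
        def entry(cid):
            return state.setdefault(cid[:12], {"name": "?", "started": False, "exited": False})
        for line in lines:
            for name, cid in CREATED.findall(line):
                entry(cid)["name"] = name
            for cid in STARTED.findall(line):
                entry(cid)["started"] = True
            for cid in EXITED.findall(line):
                entry(cid)["exited"] = True
        return state

Fed the lines above, the four setup containers come out started and exited, while e80ee387... (cilium-agent) is started and still running at this point in the log.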
Jan 30 15:45:40.354226 kubelet[1855]: I0130 15:45:40.354112 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxlfm\" (UniqueName: \"kubernetes.io/projected/4ca20ea9-f35d-4675-93a5-675cf5453d81-kube-api-access-dxlfm\") pod \"nginx-deployment-85f456d6dd-hcw4s\" (UID: \"4ca20ea9-f35d-4675-93a5-675cf5453d81\") " pod="default/nginx-deployment-85f456d6dd-hcw4s" Jan 30 15:45:40.623388 containerd[1460]: time="2025-01-30T15:45:40.622292686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-hcw4s,Uid:4ca20ea9-f35d-4675-93a5-675cf5453d81,Namespace:default,Attempt:0,}" Jan 30 15:45:40.705913 systemd-networkd[1371]: lxcec49882b39a7: Link UP Jan 30 15:45:40.712481 kernel: eth0: renamed from tmp505bb Jan 30 15:45:40.717988 systemd-networkd[1371]: lxcec49882b39a7: Gained carrier Jan 30 15:45:40.862378 kubelet[1855]: E0130 15:45:40.861709 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:41.865161 kubelet[1855]: E0130 15:45:41.862853 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:42.333586 systemd-networkd[1371]: lxcec49882b39a7: Gained IPv6LL Jan 30 15:45:42.863878 kubelet[1855]: E0130 15:45:42.863784 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:43.614821 containerd[1460]: time="2025-01-30T15:45:43.614419026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:45:43.614821 containerd[1460]: time="2025-01-30T15:45:43.614580366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:45:43.614821 containerd[1460]: time="2025-01-30T15:45:43.614609195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:43.614821 containerd[1460]: time="2025-01-30T15:45:43.614707466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:43.655421 systemd[1]: Started cri-containerd-505bb83fc2e9f130fe25c59b700ebab1346e9c6c4defede01bfeb54a7c2abce9.scope - libcontainer container 505bb83fc2e9f130fe25c59b700ebab1346e9c6c4defede01bfeb54a7c2abce9. 
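The systemd-networkd one-liners above (Link UP, Gained carrier, Gained IPv6LL) track the cilium_* devices and lxc_health, and then lxcec49882b39a7, which comes up while the nginx-deployment-85f456d6dd-hcw4s sandbox is being set up. A sketch folding them into a per-interface timeline, assuming only the '<ifname>: <event>' shape visible here:

    import re
    from collections import defaultdict

    LINK_RE = re.compile(
        r'systemd-networkd\[\d+\]: ([A-Za-z0-9_]+): '
        r'(Link UP|Link DOWN|Gained carrier|Lost carrier|Gained IPv6LL)'
    )

    def link_timeline(lines):
        """Group systemd-networkd link-state messages by interface name, in log order."""
        events = defaultdict(list)
        for line in lines:
            for ifname, event in LINK_RE.findall(line):
                events[ifname].append(event)
        return dict(events)

Run over this whole section it would also show lxc_health gaining its carrier here and losing it during the teardown further down.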
Jan 30 15:45:43.692928 containerd[1460]: time="2025-01-30T15:45:43.692894764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-hcw4s,Uid:4ca20ea9-f35d-4675-93a5-675cf5453d81,Namespace:default,Attempt:0,} returns sandbox id \"505bb83fc2e9f130fe25c59b700ebab1346e9c6c4defede01bfeb54a7c2abce9\"" Jan 30 15:45:43.695081 containerd[1460]: time="2025-01-30T15:45:43.694894681Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 15:45:43.866633 kubelet[1855]: E0130 15:45:43.865811 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:44.867006 kubelet[1855]: E0130 15:45:44.866937 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:45.867474 kubelet[1855]: E0130 15:45:45.867415 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:46.868161 kubelet[1855]: E0130 15:45:46.868077 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:47.550864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount337677024.mount: Deactivated successfully. Jan 30 15:45:47.869403 kubelet[1855]: E0130 15:45:47.869349 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:48.870046 kubelet[1855]: E0130 15:45:48.869833 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:48.921910 containerd[1460]: time="2025-01-30T15:45:48.921693666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:48.924230 containerd[1460]: time="2025-01-30T15:45:48.924075784Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 15:45:48.925611 containerd[1460]: time="2025-01-30T15:45:48.925503466Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:48.932856 containerd[1460]: time="2025-01-30T15:45:48.932736686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:45:48.935710 containerd[1460]: time="2025-01-30T15:45:48.935437859Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.240500291s" Jan 30 15:45:48.935710 containerd[1460]: time="2025-01-30T15:45:48.935510666Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 15:45:48.941891 containerd[1460]: time="2025-01-30T15:45:48.941786524Z" level=info msg="CreateContainer within sandbox \"505bb83fc2e9f130fe25c59b700ebab1346e9c6c4defede01bfeb54a7c2abce9\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 15:45:48.971826 
containerd[1460]: time="2025-01-30T15:45:48.971745546Z" level=info msg="CreateContainer within sandbox \"505bb83fc2e9f130fe25c59b700ebab1346e9c6c4defede01bfeb54a7c2abce9\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"849338e665b3285330b1cf15ab26feb62c7837ada7fde5a4f2f95bc16c0190a8\"" Jan 30 15:45:48.973201 containerd[1460]: time="2025-01-30T15:45:48.972974567Z" level=info msg="StartContainer for \"849338e665b3285330b1cf15ab26feb62c7837ada7fde5a4f2f95bc16c0190a8\"" Jan 30 15:45:48.973144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2368998916.mount: Deactivated successfully. Jan 30 15:45:49.023430 systemd[1]: Started cri-containerd-849338e665b3285330b1cf15ab26feb62c7837ada7fde5a4f2f95bc16c0190a8.scope - libcontainer container 849338e665b3285330b1cf15ab26feb62c7837ada7fde5a4f2f95bc16c0190a8. Jan 30 15:45:49.050862 containerd[1460]: time="2025-01-30T15:45:49.050816599Z" level=info msg="StartContainer for \"849338e665b3285330b1cf15ab26feb62c7837ada7fde5a4f2f95bc16c0190a8\" returns successfully" Jan 30 15:45:49.149906 kubelet[1855]: I0130 15:45:49.147987 1855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-hcw4s" podStartSLOduration=3.904125706 podStartE2EDuration="9.14797132s" podCreationTimestamp="2025-01-30 15:45:40 +0000 UTC" firstStartedPulling="2025-01-30 15:45:43.6944632 +0000 UTC m=+30.301555301" lastFinishedPulling="2025-01-30 15:45:48.938308764 +0000 UTC m=+35.545400915" observedRunningTime="2025-01-30 15:45:49.147505001 +0000 UTC m=+35.754597172" watchObservedRunningTime="2025-01-30 15:45:49.14797132 +0000 UTC m=+35.755063421" Jan 30 15:45:49.871892 kubelet[1855]: E0130 15:45:49.871803 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:50.873063 kubelet[1855]: E0130 15:45:50.872927 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:51.874049 kubelet[1855]: E0130 15:45:51.873954 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:52.874688 kubelet[1855]: E0130 15:45:52.874580 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:53.838990 kubelet[1855]: E0130 15:45:53.838895 1855 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:53.875129 kubelet[1855]: E0130 15:45:53.875053 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:54.876030 kubelet[1855]: E0130 15:45:54.875941 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:55.876820 kubelet[1855]: E0130 15:45:55.876736 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:56.027351 kubelet[1855]: I0130 15:45:56.027180 1855 topology_manager.go:215] "Topology Admit Handler" podUID="d60df04d-5f16-4c2b-b0b3-2fd338f982e0" podNamespace="default" podName="nfs-server-provisioner-0" Jan 30 15:45:56.041147 systemd[1]: Created slice kubepods-besteffort-podd60df04d_5f16_4c2b_b0b3_2fd338f982e0.slice - libcontainer container kubepods-besteffort-podd60df04d_5f16_4c2b_b0b3_2fd338f982e0.slice. 
Jan 30 15:45:56.168241 kubelet[1855]: I0130 15:45:56.167875 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d60df04d-5f16-4c2b-b0b3-2fd338f982e0-data\") pod \"nfs-server-provisioner-0\" (UID: \"d60df04d-5f16-4c2b-b0b3-2fd338f982e0\") " pod="default/nfs-server-provisioner-0" Jan 30 15:45:56.168241 kubelet[1855]: I0130 15:45:56.167956 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm8s8\" (UniqueName: \"kubernetes.io/projected/d60df04d-5f16-4c2b-b0b3-2fd338f982e0-kube-api-access-wm8s8\") pod \"nfs-server-provisioner-0\" (UID: \"d60df04d-5f16-4c2b-b0b3-2fd338f982e0\") " pod="default/nfs-server-provisioner-0" Jan 30 15:45:56.349843 containerd[1460]: time="2025-01-30T15:45:56.349710866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d60df04d-5f16-4c2b-b0b3-2fd338f982e0,Namespace:default,Attempt:0,}" Jan 30 15:45:56.418682 systemd-networkd[1371]: lxcd7515d55508c: Link UP Jan 30 15:45:56.432394 kernel: eth0: renamed from tmp2c3df Jan 30 15:45:56.440040 systemd-networkd[1371]: lxcd7515d55508c: Gained carrier Jan 30 15:45:56.808153 containerd[1460]: time="2025-01-30T15:45:56.807719171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:45:56.810457 containerd[1460]: time="2025-01-30T15:45:56.810063512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:45:56.810729 containerd[1460]: time="2025-01-30T15:45:56.810609690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:56.811251 containerd[1460]: time="2025-01-30T15:45:56.811147521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:45:56.842426 systemd[1]: Started cri-containerd-2c3dfd481bbc64cd15df9f7ea2221c614aeb2153cd0736090a2228d1d58f3e44.scope - libcontainer container 2c3dfd481bbc64cd15df9f7ea2221c614aeb2153cd0736090a2228d1d58f3e44. 
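Each RunPodSandbox result above is mirrored by a systemd "Started cri-containerd-<id>.scope" unit whose name embeds the same 64-hex sandbox ID (2c3dfd48... here, 505bb83f... for the nginx pod, and so on). A small join of the two message streams, again assuming only the shapes quoted in this journal:

    import re

    SANDBOX_RE = re.compile(
        r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),'
        r'.*?returns sandbox id \\?"([0-9a-f]{64})\\?"'
    )
    SCOPE_RE = re.compile(r'Started (cri-containerd-([0-9a-f]{64})\.scope)')

    def pods_to_scopes(lines):
        """Map pod name -> (short sandbox id, systemd scope unit) by joining the
        containerd RunPodSandbox results with the systemd Started messages."""
        sandboxes, scopes = {}, {}
        for line in lines:
            for name, sid in SANDBOX_RE.findall(line):
                sandboxes[sid] = name
            for unit, sid in SCOPE_RE.findall(line):
                scopes[sid] = unit
        return {name: (sid[:12], scopes.get(sid)) for sid, name in sandboxes.items()}

Scopes for ordinary containers (for example 70aff660... for kube-proxy) also match SCOPE_RE but are simply ignored by the final join, since their IDs never appear as sandbox IDs.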
Jan 30 15:45:56.877244 kubelet[1855]: E0130 15:45:56.877173 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:56.882896 containerd[1460]: time="2025-01-30T15:45:56.882813378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d60df04d-5f16-4c2b-b0b3-2fd338f982e0,Namespace:default,Attempt:0,} returns sandbox id \"2c3dfd481bbc64cd15df9f7ea2221c614aeb2153cd0736090a2228d1d58f3e44\"" Jan 30 15:45:56.884657 containerd[1460]: time="2025-01-30T15:45:56.884580468Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 15:45:57.877470 kubelet[1855]: E0130 15:45:57.877359 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:58.461614 systemd-networkd[1371]: lxcd7515d55508c: Gained IPv6LL Jan 30 15:45:58.877784 kubelet[1855]: E0130 15:45:58.877726 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:59.878386 kubelet[1855]: E0130 15:45:59.878343 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:45:59.959061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983959895.mount: Deactivated successfully. Jan 30 15:46:00.879351 kubelet[1855]: E0130 15:46:00.879317 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:01.880733 kubelet[1855]: E0130 15:46:01.880700 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:02.306815 containerd[1460]: time="2025-01-30T15:46:02.305953627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:02.308789 containerd[1460]: time="2025-01-30T15:46:02.308667269Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 30 15:46:02.310436 containerd[1460]: time="2025-01-30T15:46:02.310242068Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:02.317766 containerd[1460]: time="2025-01-30T15:46:02.317620655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:02.320553 containerd[1460]: time="2025-01-30T15:46:02.320490196Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.435862405s" Jan 30 15:46:02.320904 containerd[1460]: time="2025-01-30T15:46:02.320714441Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 15:46:02.328296 containerd[1460]: 
time="2025-01-30T15:46:02.328185070Z" level=info msg="CreateContainer within sandbox \"2c3dfd481bbc64cd15df9f7ea2221c614aeb2153cd0736090a2228d1d58f3e44\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 15:46:02.354703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094436702.mount: Deactivated successfully. Jan 30 15:46:02.357798 containerd[1460]: time="2025-01-30T15:46:02.357695955Z" level=info msg="CreateContainer within sandbox \"2c3dfd481bbc64cd15df9f7ea2221c614aeb2153cd0736090a2228d1d58f3e44\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"412e8ea007a6cada54bafd2746e4b39b5b03a3da8b3fc6d001bf30b3b0ffeb7d\"" Jan 30 15:46:02.358759 containerd[1460]: time="2025-01-30T15:46:02.358615946Z" level=info msg="StartContainer for \"412e8ea007a6cada54bafd2746e4b39b5b03a3da8b3fc6d001bf30b3b0ffeb7d\"" Jan 30 15:46:02.419429 systemd[1]: Started cri-containerd-412e8ea007a6cada54bafd2746e4b39b5b03a3da8b3fc6d001bf30b3b0ffeb7d.scope - libcontainer container 412e8ea007a6cada54bafd2746e4b39b5b03a3da8b3fc6d001bf30b3b0ffeb7d. Jan 30 15:46:02.450280 containerd[1460]: time="2025-01-30T15:46:02.450215222Z" level=info msg="StartContainer for \"412e8ea007a6cada54bafd2746e4b39b5b03a3da8b3fc6d001bf30b3b0ffeb7d\" returns successfully" Jan 30 15:46:02.881690 kubelet[1855]: E0130 15:46:02.881578 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:03.255168 kubelet[1855]: I0130 15:46:03.254289 1855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.814781822 podStartE2EDuration="7.254112246s" podCreationTimestamp="2025-01-30 15:45:56 +0000 UTC" firstStartedPulling="2025-01-30 15:45:56.884333295 +0000 UTC m=+43.491425386" lastFinishedPulling="2025-01-30 15:46:02.323663669 +0000 UTC m=+48.930755810" observedRunningTime="2025-01-30 15:46:03.253879846 +0000 UTC m=+49.860971998" watchObservedRunningTime="2025-01-30 15:46:03.254112246 +0000 UTC m=+49.861204387" Jan 30 15:46:03.882669 kubelet[1855]: E0130 15:46:03.882562 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:04.882747 kubelet[1855]: E0130 15:46:04.882688 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:05.883770 kubelet[1855]: E0130 15:46:05.883712 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:06.884527 kubelet[1855]: E0130 15:46:06.884424 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:07.885293 kubelet[1855]: E0130 15:46:07.885153 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:08.885733 kubelet[1855]: E0130 15:46:08.885649 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:09.886145 kubelet[1855]: E0130 15:46:09.886054 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:10.886343 kubelet[1855]: E0130 15:46:10.886233 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 
30 15:46:11.887448 kubelet[1855]: E0130 15:46:11.887343 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:12.874585 kubelet[1855]: I0130 15:46:12.874525 1855 topology_manager.go:215] "Topology Admit Handler" podUID="e37a35f4-ce5c-477b-9278-c49e891f4086" podNamespace="default" podName="test-pod-1" Jan 30 15:46:12.887828 kubelet[1855]: E0130 15:46:12.887739 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:12.891820 systemd[1]: Created slice kubepods-besteffort-pode37a35f4_ce5c_477b_9278_c49e891f4086.slice - libcontainer container kubepods-besteffort-pode37a35f4_ce5c_477b_9278_c49e891f4086.slice. Jan 30 15:46:12.988947 kubelet[1855]: I0130 15:46:12.988837 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49ef3a09-5744-4c68-ba1d-432f90450742\" (UniqueName: \"kubernetes.io/nfs/e37a35f4-ce5c-477b-9278-c49e891f4086-pvc-49ef3a09-5744-4c68-ba1d-432f90450742\") pod \"test-pod-1\" (UID: \"e37a35f4-ce5c-477b-9278-c49e891f4086\") " pod="default/test-pod-1" Jan 30 15:46:12.988947 kubelet[1855]: I0130 15:46:12.988923 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hqxz\" (UniqueName: \"kubernetes.io/projected/e37a35f4-ce5c-477b-9278-c49e891f4086-kube-api-access-2hqxz\") pod \"test-pod-1\" (UID: \"e37a35f4-ce5c-477b-9278-c49e891f4086\") " pod="default/test-pod-1" Jan 30 15:46:13.160365 kernel: FS-Cache: Loaded Jan 30 15:46:13.253808 kernel: RPC: Registered named UNIX socket transport module. Jan 30 15:46:13.253951 kernel: RPC: Registered udp transport module. Jan 30 15:46:13.253985 kernel: RPC: Registered tcp transport module. Jan 30 15:46:13.254014 kernel: RPC: Registered tcp-with-tls transport module. Jan 30 15:46:13.254576 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 30 15:46:13.592472 kernel: NFS: Registering the id_resolver key type Jan 30 15:46:13.592688 kernel: Key type id_resolver registered Jan 30 15:46:13.592717 kernel: Key type id_legacy registered Jan 30 15:46:13.639651 nfsidmap[3246]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Jan 30 15:46:13.648097 nfsidmap[3247]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Jan 30 15:46:13.801587 containerd[1460]: time="2025-01-30T15:46:13.801168399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e37a35f4-ce5c-477b-9278-c49e891f4086,Namespace:default,Attempt:0,}" Jan 30 15:46:13.844661 kubelet[1855]: E0130 15:46:13.839354 1855 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:13.884774 systemd-networkd[1371]: lxc4697750d4b8b: Link UP Jan 30 15:46:13.888362 kubelet[1855]: E0130 15:46:13.888314 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:13.896305 kernel: eth0: renamed from tmp3621d Jan 30 15:46:13.903446 systemd-networkd[1371]: lxc4697750d4b8b: Gained carrier Jan 30 15:46:14.167336 containerd[1460]: time="2025-01-30T15:46:14.167016068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:46:14.167507 containerd[1460]: time="2025-01-30T15:46:14.167222834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:46:14.167507 containerd[1460]: time="2025-01-30T15:46:14.167281620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:14.167507 containerd[1460]: time="2025-01-30T15:46:14.167394902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:14.197627 systemd[1]: Started cri-containerd-3621db473f6892b1772153ac28db28e9e1435f42fa27f8bc6be71783e6c977b6.scope - libcontainer container 3621db473f6892b1772153ac28db28e9e1435f42fa27f8bc6be71783e6c977b6. Jan 30 15:46:14.248180 containerd[1460]: time="2025-01-30T15:46:14.248113790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e37a35f4-ce5c-477b-9278-c49e891f4086,Namespace:default,Attempt:0,} returns sandbox id \"3621db473f6892b1772153ac28db28e9e1435f42fa27f8bc6be71783e6c977b6\"" Jan 30 15:46:14.250354 containerd[1460]: time="2025-01-30T15:46:14.250008123Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 15:46:14.692481 containerd[1460]: time="2025-01-30T15:46:14.692065492Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:14.694447 containerd[1460]: time="2025-01-30T15:46:14.694239435Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 15:46:14.703436 containerd[1460]: time="2025-01-30T15:46:14.703354291Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 453.30161ms" Jan 30 15:46:14.703921 containerd[1460]: time="2025-01-30T15:46:14.703709289Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 15:46:14.708826 containerd[1460]: time="2025-01-30T15:46:14.708725568Z" level=info msg="CreateContainer within sandbox \"3621db473f6892b1772153ac28db28e9e1435f42fa27f8bc6be71783e6c977b6\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 15:46:14.756101 containerd[1460]: time="2025-01-30T15:46:14.755662706Z" level=info msg="CreateContainer within sandbox \"3621db473f6892b1772153ac28db28e9e1435f42fa27f8bc6be71783e6c977b6\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e29c2a8af1dc3d7179aecedbb26eb514c5fa994d6363453bc578ef3d4e8078ac\"" Jan 30 15:46:14.757347 containerd[1460]: time="2025-01-30T15:46:14.757203013Z" level=info msg="StartContainer for \"e29c2a8af1dc3d7179aecedbb26eb514c5fa994d6363453bc578ef3d4e8078ac\"" Jan 30 15:46:14.806474 systemd[1]: Started cri-containerd-e29c2a8af1dc3d7179aecedbb26eb514c5fa994d6363453bc578ef3d4e8078ac.scope - libcontainer container e29c2a8af1dc3d7179aecedbb26eb514c5fa994d6363453bc578ef3d4e8078ac. 
Jan 30 15:46:14.846423 containerd[1460]: time="2025-01-30T15:46:14.846377871Z" level=info msg="StartContainer for \"e29c2a8af1dc3d7179aecedbb26eb514c5fa994d6363453bc578ef3d4e8078ac\" returns successfully" Jan 30 15:46:14.889096 kubelet[1855]: E0130 15:46:14.889039 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:15.289924 kubelet[1855]: I0130 15:46:15.289824 1855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.833896601 podStartE2EDuration="17.2897928s" podCreationTimestamp="2025-01-30 15:45:58 +0000 UTC" firstStartedPulling="2025-01-30 15:46:14.249715709 +0000 UTC m=+60.856807810" lastFinishedPulling="2025-01-30 15:46:14.705611858 +0000 UTC m=+61.312704009" observedRunningTime="2025-01-30 15:46:15.289299341 +0000 UTC m=+61.896391512" watchObservedRunningTime="2025-01-30 15:46:15.2897928 +0000 UTC m=+61.896884942" Jan 30 15:46:15.549665 systemd-networkd[1371]: lxc4697750d4b8b: Gained IPv6LL Jan 30 15:46:15.890220 kubelet[1855]: E0130 15:46:15.890130 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:16.890486 kubelet[1855]: E0130 15:46:16.890354 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:17.890833 kubelet[1855]: E0130 15:46:17.890708 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:18.891903 kubelet[1855]: E0130 15:46:18.891771 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:19.892627 kubelet[1855]: E0130 15:46:19.892527 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:20.893087 kubelet[1855]: E0130 15:46:20.892983 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:21.893923 kubelet[1855]: E0130 15:46:21.893811 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:22.894652 kubelet[1855]: E0130 15:46:22.894559 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:23.895227 kubelet[1855]: E0130 15:46:23.895044 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:24.896137 kubelet[1855]: E0130 15:46:24.896015 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:25.896751 kubelet[1855]: E0130 15:46:25.896641 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:26.553156 containerd[1460]: time="2025-01-30T15:46:26.553019340Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 15:46:26.571872 containerd[1460]: time="2025-01-30T15:46:26.571793393Z" level=info msg="StopContainer for 
\"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db\" with timeout 2 (s)" Jan 30 15:46:26.573036 containerd[1460]: time="2025-01-30T15:46:26.572937763Z" level=info msg="Stop container \"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db\" with signal terminated" Jan 30 15:46:26.587007 systemd-networkd[1371]: lxc_health: Link DOWN Jan 30 15:46:26.587024 systemd-networkd[1371]: lxc_health: Lost carrier Jan 30 15:46:26.604971 systemd[1]: cri-containerd-e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db.scope: Deactivated successfully. Jan 30 15:46:26.605463 systemd[1]: cri-containerd-e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db.scope: Consumed 8.993s CPU time. Jan 30 15:46:26.636624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db-rootfs.mount: Deactivated successfully. Jan 30 15:46:26.897908 kubelet[1855]: E0130 15:46:26.897759 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:27.897984 kubelet[1855]: E0130 15:46:27.897891 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:28.279382 containerd[1460]: time="2025-01-30T15:46:28.279092338Z" level=info msg="shim disconnected" id=e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db namespace=k8s.io Jan 30 15:46:28.279382 containerd[1460]: time="2025-01-30T15:46:28.279217208Z" level=warning msg="cleaning up after shim disconnected" id=e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db namespace=k8s.io Jan 30 15:46:28.279382 containerd[1460]: time="2025-01-30T15:46:28.279251471Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:46:28.326507 containerd[1460]: time="2025-01-30T15:46:28.326286843Z" level=info msg="StopContainer for \"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db\" returns successfully" Jan 30 15:46:28.327558 containerd[1460]: time="2025-01-30T15:46:28.327452429Z" level=info msg="StopPodSandbox for \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\"" Jan 30 15:46:28.327558 containerd[1460]: time="2025-01-30T15:46:28.327543708Z" level=info msg="Container to stop \"84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:46:28.327741 containerd[1460]: time="2025-01-30T15:46:28.327577058Z" level=info msg="Container to stop \"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:46:28.327741 containerd[1460]: time="2025-01-30T15:46:28.327602866Z" level=info msg="Container to stop \"20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:46:28.327741 containerd[1460]: time="2025-01-30T15:46:28.327627532Z" level=info msg="Container to stop \"5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:46:28.327741 containerd[1460]: time="2025-01-30T15:46:28.327653099Z" level=info msg="Container to stop \"7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:46:28.332535 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e-shm.mount: Deactivated successfully. Jan 30 15:46:28.343451 systemd[1]: cri-containerd-786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e.scope: Deactivated successfully. Jan 30 15:46:28.385766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e-rootfs.mount: Deactivated successfully. Jan 30 15:46:28.391707 containerd[1460]: time="2025-01-30T15:46:28.391499225Z" level=info msg="shim disconnected" id=786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e namespace=k8s.io Jan 30 15:46:28.391707 containerd[1460]: time="2025-01-30T15:46:28.391566189Z" level=warning msg="cleaning up after shim disconnected" id=786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e namespace=k8s.io Jan 30 15:46:28.391707 containerd[1460]: time="2025-01-30T15:46:28.391577709Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:46:28.406313 containerd[1460]: time="2025-01-30T15:46:28.405590924Z" level=warning msg="cleanup warnings time=\"2025-01-30T15:46:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 15:46:28.406942 containerd[1460]: time="2025-01-30T15:46:28.406916184Z" level=info msg="TearDown network for sandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" successfully" Jan 30 15:46:28.407035 containerd[1460]: time="2025-01-30T15:46:28.407014385Z" level=info msg="StopPodSandbox for \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" returns successfully" Jan 30 15:46:28.509564 kubelet[1855]: I0130 15:46:28.509245 1855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2006ec0f-f993-4556-96ae-a863921f36b0-clustermesh-secrets\") pod \"2006ec0f-f993-4556-96ae-a863921f36b0\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " Jan 30 15:46:28.509564 kubelet[1855]: I0130 15:46:28.509305 1855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-etc-cni-netd\") pod \"2006ec0f-f993-4556-96ae-a863921f36b0\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " Jan 30 15:46:28.509564 kubelet[1855]: I0130 15:46:28.509324 1855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-xtables-lock\") pod \"2006ec0f-f993-4556-96ae-a863921f36b0\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " Jan 30 15:46:28.509564 kubelet[1855]: I0130 15:46:28.509361 1855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-cilium-cgroup\") pod \"2006ec0f-f993-4556-96ae-a863921f36b0\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " Jan 30 15:46:28.509564 kubelet[1855]: I0130 15:46:28.509384 1855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2006ec0f-f993-4556-96ae-a863921f36b0-cilium-config-path\") pod \"2006ec0f-f993-4556-96ae-a863921f36b0\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " Jan 30 
15:46:28.509564 kubelet[1855]: I0130 15:46:28.509401 1855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-cni-path\") pod \"2006ec0f-f993-4556-96ae-a863921f36b0\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " Jan 30 15:46:28.510064 kubelet[1855]: I0130 15:46:28.509418 1855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-lib-modules\") pod \"2006ec0f-f993-4556-96ae-a863921f36b0\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " Jan 30 15:46:28.510064 kubelet[1855]: I0130 15:46:28.509434 1855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-bpf-maps\") pod \"2006ec0f-f993-4556-96ae-a863921f36b0\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " Jan 30 15:46:28.510064 kubelet[1855]: I0130 15:46:28.509451 1855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-host-proc-sys-net\") pod \"2006ec0f-f993-4556-96ae-a863921f36b0\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " Jan 30 15:46:28.510064 kubelet[1855]: I0130 15:46:28.509468 1855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-host-proc-sys-kernel\") pod \"2006ec0f-f993-4556-96ae-a863921f36b0\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " Jan 30 15:46:28.510064 kubelet[1855]: I0130 15:46:28.509487 1855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2006ec0f-f993-4556-96ae-a863921f36b0-hubble-tls\") pod \"2006ec0f-f993-4556-96ae-a863921f36b0\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " Jan 30 15:46:28.510064 kubelet[1855]: I0130 15:46:28.509507 1855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wbjz\" (UniqueName: \"kubernetes.io/projected/2006ec0f-f993-4556-96ae-a863921f36b0-kube-api-access-9wbjz\") pod \"2006ec0f-f993-4556-96ae-a863921f36b0\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " Jan 30 15:46:28.510465 kubelet[1855]: I0130 15:46:28.509526 1855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-hostproc\") pod \"2006ec0f-f993-4556-96ae-a863921f36b0\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " Jan 30 15:46:28.510465 kubelet[1855]: I0130 15:46:28.509542 1855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-cilium-run\") pod \"2006ec0f-f993-4556-96ae-a863921f36b0\" (UID: \"2006ec0f-f993-4556-96ae-a863921f36b0\") " Jan 30 15:46:28.510465 kubelet[1855]: I0130 15:46:28.509622 1855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2006ec0f-f993-4556-96ae-a863921f36b0" (UID: "2006ec0f-f993-4556-96ae-a863921f36b0"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:28.512323 kubelet[1855]: I0130 15:46:28.510756 1855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2006ec0f-f993-4556-96ae-a863921f36b0" (UID: "2006ec0f-f993-4556-96ae-a863921f36b0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:28.512323 kubelet[1855]: I0130 15:46:28.510824 1855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2006ec0f-f993-4556-96ae-a863921f36b0" (UID: "2006ec0f-f993-4556-96ae-a863921f36b0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:28.512323 kubelet[1855]: I0130 15:46:28.510895 1855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2006ec0f-f993-4556-96ae-a863921f36b0" (UID: "2006ec0f-f993-4556-96ae-a863921f36b0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:28.512323 kubelet[1855]: I0130 15:46:28.510930 1855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2006ec0f-f993-4556-96ae-a863921f36b0" (UID: "2006ec0f-f993-4556-96ae-a863921f36b0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:28.512657 kubelet[1855]: I0130 15:46:28.512391 1855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2006ec0f-f993-4556-96ae-a863921f36b0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2006ec0f-f993-4556-96ae-a863921f36b0" (UID: "2006ec0f-f993-4556-96ae-a863921f36b0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:46:28.512657 kubelet[1855]: I0130 15:46:28.512544 1855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2006ec0f-f993-4556-96ae-a863921f36b0" (UID: "2006ec0f-f993-4556-96ae-a863921f36b0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:28.512657 kubelet[1855]: I0130 15:46:28.512596 1855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2006ec0f-f993-4556-96ae-a863921f36b0" (UID: "2006ec0f-f993-4556-96ae-a863921f36b0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:28.512847 kubelet[1855]: I0130 15:46:28.512638 1855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2006ec0f-f993-4556-96ae-a863921f36b0" (UID: "2006ec0f-f993-4556-96ae-a863921f36b0"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:28.517903 systemd[1]: var-lib-kubelet-pods-2006ec0f\x2df993\x2d4556\x2d96ae\x2da863921f36b0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 15:46:28.521377 kubelet[1855]: I0130 15:46:28.519000 1855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-cni-path" (OuterVolumeSpecName: "cni-path") pod "2006ec0f-f993-4556-96ae-a863921f36b0" (UID: "2006ec0f-f993-4556-96ae-a863921f36b0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:28.523329 kubelet[1855]: I0130 15:46:28.523170 1855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-hostproc" (OuterVolumeSpecName: "hostproc") pod "2006ec0f-f993-4556-96ae-a863921f36b0" (UID: "2006ec0f-f993-4556-96ae-a863921f36b0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:28.529076 systemd[1]: var-lib-kubelet-pods-2006ec0f\x2df993\x2d4556\x2d96ae\x2da863921f36b0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 15:46:28.529594 kubelet[1855]: I0130 15:46:28.529186 1855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2006ec0f-f993-4556-96ae-a863921f36b0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2006ec0f-f993-4556-96ae-a863921f36b0" (UID: "2006ec0f-f993-4556-96ae-a863921f36b0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:46:28.529594 kubelet[1855]: I0130 15:46:28.529427 1855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2006ec0f-f993-4556-96ae-a863921f36b0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2006ec0f-f993-4556-96ae-a863921f36b0" (UID: "2006ec0f-f993-4556-96ae-a863921f36b0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:46:28.535400 kubelet[1855]: I0130 15:46:28.534490 1855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2006ec0f-f993-4556-96ae-a863921f36b0-kube-api-access-9wbjz" (OuterVolumeSpecName: "kube-api-access-9wbjz") pod "2006ec0f-f993-4556-96ae-a863921f36b0" (UID: "2006ec0f-f993-4556-96ae-a863921f36b0"). InnerVolumeSpecName "kube-api-access-9wbjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:46:28.539610 systemd[1]: var-lib-kubelet-pods-2006ec0f\x2df993\x2d4556\x2d96ae\x2da863921f36b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9wbjz.mount: Deactivated successfully. 
Jan 30 15:46:28.611229 kubelet[1855]: I0130 15:46:28.610860 1855 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2006ec0f-f993-4556-96ae-a863921f36b0-clustermesh-secrets\") on node \"172.24.4.74\" DevicePath \"\"" Jan 30 15:46:28.611229 kubelet[1855]: I0130 15:46:28.610923 1855 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-etc-cni-netd\") on node \"172.24.4.74\" DevicePath \"\"" Jan 30 15:46:28.611229 kubelet[1855]: I0130 15:46:28.610947 1855 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-xtables-lock\") on node \"172.24.4.74\" DevicePath \"\"" Jan 30 15:46:28.611229 kubelet[1855]: I0130 15:46:28.610969 1855 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-cilium-cgroup\") on node \"172.24.4.74\" DevicePath \"\"" Jan 30 15:46:28.611229 kubelet[1855]: I0130 15:46:28.610991 1855 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2006ec0f-f993-4556-96ae-a863921f36b0-cilium-config-path\") on node \"172.24.4.74\" DevicePath \"\"" Jan 30 15:46:28.611229 kubelet[1855]: I0130 15:46:28.611026 1855 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-cni-path\") on node \"172.24.4.74\" DevicePath \"\"" Jan 30 15:46:28.611229 kubelet[1855]: I0130 15:46:28.611047 1855 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-lib-modules\") on node \"172.24.4.74\" DevicePath \"\"" Jan 30 15:46:28.611229 kubelet[1855]: I0130 15:46:28.611067 1855 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-bpf-maps\") on node \"172.24.4.74\" DevicePath \"\"" Jan 30 15:46:28.611992 kubelet[1855]: I0130 15:46:28.611087 1855 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-host-proc-sys-net\") on node \"172.24.4.74\" DevicePath \"\"" Jan 30 15:46:28.611992 kubelet[1855]: I0130 15:46:28.611108 1855 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-host-proc-sys-kernel\") on node \"172.24.4.74\" DevicePath \"\"" Jan 30 15:46:28.611992 kubelet[1855]: I0130 15:46:28.611129 1855 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2006ec0f-f993-4556-96ae-a863921f36b0-hubble-tls\") on node \"172.24.4.74\" DevicePath \"\"" Jan 30 15:46:28.611992 kubelet[1855]: I0130 15:46:28.611149 1855 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9wbjz\" (UniqueName: \"kubernetes.io/projected/2006ec0f-f993-4556-96ae-a863921f36b0-kube-api-access-9wbjz\") on node \"172.24.4.74\" DevicePath \"\"" Jan 30 15:46:28.611992 kubelet[1855]: I0130 15:46:28.611169 1855 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-hostproc\") on node \"172.24.4.74\" DevicePath \"\"" Jan 30 
15:46:28.611992 kubelet[1855]: I0130 15:46:28.611191 1855 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2006ec0f-f993-4556-96ae-a863921f36b0-cilium-run\") on node \"172.24.4.74\" DevicePath \"\"" Jan 30 15:46:28.899154 kubelet[1855]: E0130 15:46:28.899064 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:28.996863 kubelet[1855]: E0130 15:46:28.996770 1855 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 15:46:29.327481 kubelet[1855]: I0130 15:46:29.326105 1855 scope.go:117] "RemoveContainer" containerID="e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db" Jan 30 15:46:29.332619 containerd[1460]: time="2025-01-30T15:46:29.331807118Z" level=info msg="RemoveContainer for \"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db\"" Jan 30 15:46:29.340954 containerd[1460]: time="2025-01-30T15:46:29.340874491Z" level=info msg="RemoveContainer for \"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db\" returns successfully" Jan 30 15:46:29.341642 systemd[1]: Removed slice kubepods-burstable-pod2006ec0f_f993_4556_96ae_a863921f36b0.slice - libcontainer container kubepods-burstable-pod2006ec0f_f993_4556_96ae_a863921f36b0.slice. Jan 30 15:46:29.342146 kubelet[1855]: I0130 15:46:29.341629 1855 scope.go:117] "RemoveContainer" containerID="84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6" Jan 30 15:46:29.342659 systemd[1]: kubepods-burstable-pod2006ec0f_f993_4556_96ae_a863921f36b0.slice: Consumed 9.111s CPU time. Jan 30 15:46:29.346776 containerd[1460]: time="2025-01-30T15:46:29.346693163Z" level=info msg="RemoveContainer for \"84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6\"" Jan 30 15:46:29.355233 containerd[1460]: time="2025-01-30T15:46:29.355129363Z" level=info msg="RemoveContainer for \"84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6\" returns successfully" Jan 30 15:46:29.355903 kubelet[1855]: I0130 15:46:29.355702 1855 scope.go:117] "RemoveContainer" containerID="7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b" Jan 30 15:46:29.358923 containerd[1460]: time="2025-01-30T15:46:29.358461459Z" level=info msg="RemoveContainer for \"7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b\"" Jan 30 15:46:29.366006 containerd[1460]: time="2025-01-30T15:46:29.365930073Z" level=info msg="RemoveContainer for \"7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b\" returns successfully" Jan 30 15:46:29.367001 kubelet[1855]: I0130 15:46:29.366670 1855 scope.go:117] "RemoveContainer" containerID="5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c" Jan 30 15:46:29.370124 containerd[1460]: time="2025-01-30T15:46:29.369944067Z" level=info msg="RemoveContainer for \"5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c\"" Jan 30 15:46:29.377161 containerd[1460]: time="2025-01-30T15:46:29.377010910Z" level=info msg="RemoveContainer for \"5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c\" returns successfully" Jan 30 15:46:29.377990 kubelet[1855]: I0130 15:46:29.377471 1855 scope.go:117] "RemoveContainer" containerID="20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d" Jan 30 15:46:29.380326 containerd[1460]: time="2025-01-30T15:46:29.380072998Z" level=info 
msg="RemoveContainer for \"20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d\"" Jan 30 15:46:29.388287 containerd[1460]: time="2025-01-30T15:46:29.387631879Z" level=info msg="RemoveContainer for \"20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d\" returns successfully" Jan 30 15:46:29.389937 kubelet[1855]: I0130 15:46:29.389902 1855 scope.go:117] "RemoveContainer" containerID="e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db" Jan 30 15:46:29.390842 containerd[1460]: time="2025-01-30T15:46:29.390768474Z" level=error msg="ContainerStatus for \"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db\": not found" Jan 30 15:46:29.391204 kubelet[1855]: E0130 15:46:29.391166 1855 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db\": not found" containerID="e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db" Jan 30 15:46:29.391745 kubelet[1855]: I0130 15:46:29.391367 1855 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db"} err="failed to get container status \"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db\": rpc error: code = NotFound desc = an error occurred when try to find container \"e80ee387257ccd730bbfd2a059fefd943e60b71d2a12bd6f6877f1048de471db\": not found" Jan 30 15:46:29.391745 kubelet[1855]: I0130 15:46:29.391541 1855 scope.go:117] "RemoveContainer" containerID="84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6" Jan 30 15:46:29.391934 containerd[1460]: time="2025-01-30T15:46:29.391869636Z" level=error msg="ContainerStatus for \"84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6\": not found" Jan 30 15:46:29.393353 kubelet[1855]: E0130 15:46:29.393319 1855 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6\": not found" containerID="84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6" Jan 30 15:46:29.393971 kubelet[1855]: I0130 15:46:29.393548 1855 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6"} err="failed to get container status \"84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6\": rpc error: code = NotFound desc = an error occurred when try to find container \"84c59b9e94ec089f67edd5e0f279b218863c89edf9458a564f31e724810dade6\": not found" Jan 30 15:46:29.393971 kubelet[1855]: I0130 15:46:29.393586 1855 scope.go:117] "RemoveContainer" containerID="7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b" Jan 30 15:46:29.394102 containerd[1460]: time="2025-01-30T15:46:29.393859285Z" level=error msg="ContainerStatus for \"7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b\" failed" error="rpc error: code = NotFound desc = an error occurred 
when try to find container \"7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b\": not found" Jan 30 15:46:29.394336 kubelet[1855]: E0130 15:46:29.394166 1855 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b\": not found" containerID="7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b" Jan 30 15:46:29.394523 kubelet[1855]: I0130 15:46:29.394366 1855 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b"} err="failed to get container status \"7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f4fdd079c6dc12c46e9419a7403a1c0762f24e5e7bed99e6677aecf1f62c01b\": not found" Jan 30 15:46:29.394603 kubelet[1855]: I0130 15:46:29.394561 1855 scope.go:117] "RemoveContainer" containerID="5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c" Jan 30 15:46:29.395422 containerd[1460]: time="2025-01-30T15:46:29.395209375Z" level=error msg="ContainerStatus for \"5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c\": not found" Jan 30 15:46:29.395706 kubelet[1855]: E0130 15:46:29.395637 1855 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c\": not found" containerID="5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c" Jan 30 15:46:29.395771 kubelet[1855]: I0130 15:46:29.395716 1855 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c"} err="failed to get container status \"5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c\": rpc error: code = NotFound desc = an error occurred when try to find container \"5eed5899dc48ba600045e55b59b6effc9ef90e7ff118ff45515d11971168291c\": not found" Jan 30 15:46:29.395771 kubelet[1855]: I0130 15:46:29.395755 1855 scope.go:117] "RemoveContainer" containerID="20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d" Jan 30 15:46:29.396380 containerd[1460]: time="2025-01-30T15:46:29.396057319Z" level=error msg="ContainerStatus for \"20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d\": not found" Jan 30 15:46:29.396476 kubelet[1855]: E0130 15:46:29.396241 1855 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d\": not found" containerID="20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d" Jan 30 15:46:29.396476 kubelet[1855]: I0130 15:46:29.396309 1855 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d"} 
err="failed to get container status \"20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d\": rpc error: code = NotFound desc = an error occurred when try to find container \"20f6afd7e110834190fee0b47473f65f60d9305987c4a9ed5af337ff415c6b6d\": not found" Jan 30 15:46:29.900118 kubelet[1855]: E0130 15:46:29.900047 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:29.973919 kubelet[1855]: I0130 15:46:29.973835 1855 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2006ec0f-f993-4556-96ae-a863921f36b0" path="/var/lib/kubelet/pods/2006ec0f-f993-4556-96ae-a863921f36b0/volumes" Jan 30 15:46:30.377317 kubelet[1855]: I0130 15:46:30.376519 1855 topology_manager.go:215] "Topology Admit Handler" podUID="b1b2e131-ad20-466d-9ec4-bbac07485ae1" podNamespace="kube-system" podName="cilium-operator-599987898-5fwwv" Jan 30 15:46:30.377317 kubelet[1855]: E0130 15:46:30.376613 1855 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2006ec0f-f993-4556-96ae-a863921f36b0" containerName="cilium-agent" Jan 30 15:46:30.377317 kubelet[1855]: E0130 15:46:30.376637 1855 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2006ec0f-f993-4556-96ae-a863921f36b0" containerName="mount-cgroup" Jan 30 15:46:30.377317 kubelet[1855]: E0130 15:46:30.376652 1855 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2006ec0f-f993-4556-96ae-a863921f36b0" containerName="apply-sysctl-overwrites" Jan 30 15:46:30.377317 kubelet[1855]: E0130 15:46:30.376671 1855 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2006ec0f-f993-4556-96ae-a863921f36b0" containerName="mount-bpf-fs" Jan 30 15:46:30.377317 kubelet[1855]: E0130 15:46:30.376692 1855 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2006ec0f-f993-4556-96ae-a863921f36b0" containerName="clean-cilium-state" Jan 30 15:46:30.377317 kubelet[1855]: I0130 15:46:30.376753 1855 memory_manager.go:354] "RemoveStaleState removing state" podUID="2006ec0f-f993-4556-96ae-a863921f36b0" containerName="cilium-agent" Jan 30 15:46:30.377317 kubelet[1855]: I0130 15:46:30.376973 1855 topology_manager.go:215] "Topology Admit Handler" podUID="002280dc-5ffc-4dce-976d-2e7940e53bd8" podNamespace="kube-system" podName="cilium-t2svg" Jan 30 15:46:30.394770 systemd[1]: Created slice kubepods-burstable-pod002280dc_5ffc_4dce_976d_2e7940e53bd8.slice - libcontainer container kubepods-burstable-pod002280dc_5ffc_4dce_976d_2e7940e53bd8.slice. 
Jan 30 15:46:30.427005 kubelet[1855]: I0130 15:46:30.425579 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1b2e131-ad20-466d-9ec4-bbac07485ae1-cilium-config-path\") pod \"cilium-operator-599987898-5fwwv\" (UID: \"b1b2e131-ad20-466d-9ec4-bbac07485ae1\") " pod="kube-system/cilium-operator-599987898-5fwwv" Jan 30 15:46:30.427005 kubelet[1855]: I0130 15:46:30.425670 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/002280dc-5ffc-4dce-976d-2e7940e53bd8-hostproc\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.427005 kubelet[1855]: I0130 15:46:30.425720 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/002280dc-5ffc-4dce-976d-2e7940e53bd8-cni-path\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.427005 kubelet[1855]: I0130 15:46:30.425763 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/002280dc-5ffc-4dce-976d-2e7940e53bd8-etc-cni-netd\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.427005 kubelet[1855]: I0130 15:46:30.425813 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/002280dc-5ffc-4dce-976d-2e7940e53bd8-host-proc-sys-kernel\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.427744 kubelet[1855]: I0130 15:46:30.425861 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvn6p\" (UniqueName: \"kubernetes.io/projected/b1b2e131-ad20-466d-9ec4-bbac07485ae1-kube-api-access-rvn6p\") pod \"cilium-operator-599987898-5fwwv\" (UID: \"b1b2e131-ad20-466d-9ec4-bbac07485ae1\") " pod="kube-system/cilium-operator-599987898-5fwwv" Jan 30 15:46:30.427744 kubelet[1855]: I0130 15:46:30.425907 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/002280dc-5ffc-4dce-976d-2e7940e53bd8-bpf-maps\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.427744 kubelet[1855]: I0130 15:46:30.425948 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/002280dc-5ffc-4dce-976d-2e7940e53bd8-lib-modules\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.427744 kubelet[1855]: I0130 15:46:30.426001 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/002280dc-5ffc-4dce-976d-2e7940e53bd8-host-proc-sys-net\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.427744 kubelet[1855]: I0130 15:46:30.426046 1855 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/002280dc-5ffc-4dce-976d-2e7940e53bd8-xtables-lock\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.428053 kubelet[1855]: I0130 15:46:30.426087 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/002280dc-5ffc-4dce-976d-2e7940e53bd8-clustermesh-secrets\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.428053 kubelet[1855]: I0130 15:46:30.426136 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/002280dc-5ffc-4dce-976d-2e7940e53bd8-cilium-config-path\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.428053 kubelet[1855]: I0130 15:46:30.426179 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/002280dc-5ffc-4dce-976d-2e7940e53bd8-hubble-tls\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.428053 kubelet[1855]: I0130 15:46:30.426224 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcz8z\" (UniqueName: \"kubernetes.io/projected/002280dc-5ffc-4dce-976d-2e7940e53bd8-kube-api-access-wcz8z\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.428053 kubelet[1855]: I0130 15:46:30.426316 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/002280dc-5ffc-4dce-976d-2e7940e53bd8-cilium-run\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.428053 kubelet[1855]: I0130 15:46:30.426363 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/002280dc-5ffc-4dce-976d-2e7940e53bd8-cilium-cgroup\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.428468 kubelet[1855]: I0130 15:46:30.426410 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/002280dc-5ffc-4dce-976d-2e7940e53bd8-cilium-ipsec-secrets\") pod \"cilium-t2svg\" (UID: \"002280dc-5ffc-4dce-976d-2e7940e53bd8\") " pod="kube-system/cilium-t2svg" Jan 30 15:46:30.433201 systemd[1]: Created slice kubepods-besteffort-podb1b2e131_ad20_466d_9ec4_bbac07485ae1.slice - libcontainer container kubepods-besteffort-podb1b2e131_ad20_466d_9ec4_bbac07485ae1.slice. 
Jan 30 15:46:30.724071 containerd[1460]: time="2025-01-30T15:46:30.722956765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t2svg,Uid:002280dc-5ffc-4dce-976d-2e7940e53bd8,Namespace:kube-system,Attempt:0,}" Jan 30 15:46:30.741324 containerd[1460]: time="2025-01-30T15:46:30.741086315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-5fwwv,Uid:b1b2e131-ad20-466d-9ec4-bbac07485ae1,Namespace:kube-system,Attempt:0,}" Jan 30 15:46:30.798449 containerd[1460]: time="2025-01-30T15:46:30.796241381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:46:30.798449 containerd[1460]: time="2025-01-30T15:46:30.796319115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:46:30.798449 containerd[1460]: time="2025-01-30T15:46:30.796334152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:30.798449 containerd[1460]: time="2025-01-30T15:46:30.796550793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:30.824516 systemd[1]: Started cri-containerd-470eb2cbce4a489309168f1ccef62a7dd17e05bfd309aee14752600cd59f6d1d.scope - libcontainer container 470eb2cbce4a489309168f1ccef62a7dd17e05bfd309aee14752600cd59f6d1d. Jan 30 15:46:30.832538 containerd[1460]: time="2025-01-30T15:46:30.829446629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:46:30.832538 containerd[1460]: time="2025-01-30T15:46:30.829517821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:46:30.832538 containerd[1460]: time="2025-01-30T15:46:30.829532398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:30.832538 containerd[1460]: time="2025-01-30T15:46:30.829620951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:30.856507 systemd[1]: Started cri-containerd-08e682d37ca868708f10584c420eb3fb330d69623a0eb0d3ebbf80b23f346319.scope - libcontainer container 08e682d37ca868708f10584c420eb3fb330d69623a0eb0d3ebbf80b23f346319. 
Jan 30 15:46:30.863723 containerd[1460]: time="2025-01-30T15:46:30.863357482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t2svg,Uid:002280dc-5ffc-4dce-976d-2e7940e53bd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"470eb2cbce4a489309168f1ccef62a7dd17e05bfd309aee14752600cd59f6d1d\"" Jan 30 15:46:30.867891 containerd[1460]: time="2025-01-30T15:46:30.867720591Z" level=info msg="CreateContainer within sandbox \"470eb2cbce4a489309168f1ccef62a7dd17e05bfd309aee14752600cd59f6d1d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 15:46:30.892907 containerd[1460]: time="2025-01-30T15:46:30.892860692Z" level=info msg="CreateContainer within sandbox \"470eb2cbce4a489309168f1ccef62a7dd17e05bfd309aee14752600cd59f6d1d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e14e681d382df294e4f3b834a8e5413cef961f62402811c55cf8bbfe011751f6\"" Jan 30 15:46:30.894335 containerd[1460]: time="2025-01-30T15:46:30.894014403Z" level=info msg="StartContainer for \"e14e681d382df294e4f3b834a8e5413cef961f62402811c55cf8bbfe011751f6\"" Jan 30 15:46:30.901015 kubelet[1855]: E0130 15:46:30.900977 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:30.906046 containerd[1460]: time="2025-01-30T15:46:30.905973855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-5fwwv,Uid:b1b2e131-ad20-466d-9ec4-bbac07485ae1,Namespace:kube-system,Attempt:0,} returns sandbox id \"08e682d37ca868708f10584c420eb3fb330d69623a0eb0d3ebbf80b23f346319\"" Jan 30 15:46:30.908609 containerd[1460]: time="2025-01-30T15:46:30.908419674Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 15:46:30.939469 systemd[1]: Started cri-containerd-e14e681d382df294e4f3b834a8e5413cef961f62402811c55cf8bbfe011751f6.scope - libcontainer container e14e681d382df294e4f3b834a8e5413cef961f62402811c55cf8bbfe011751f6. Jan 30 15:46:30.978322 containerd[1460]: time="2025-01-30T15:46:30.977515191Z" level=info msg="StartContainer for \"e14e681d382df294e4f3b834a8e5413cef961f62402811c55cf8bbfe011751f6\" returns successfully" Jan 30 15:46:30.979401 systemd[1]: cri-containerd-e14e681d382df294e4f3b834a8e5413cef961f62402811c55cf8bbfe011751f6.scope: Deactivated successfully. 
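Read together, the teardown earlier in the log (StopContainer, StopPodSandbox, RemoveContainer, plus the ContainerStatus lookups that now return NotFound) and the bring-up here (RunPodSandbox returning a sandbox id, then CreateContainer and StartContainer inside it) are the kubelet driving containerd through the CRI. The stub below is only an abridged illustration of that call order; the Go signatures are invented for readability, whereas the real contract is the gRPC RuntimeService in k8s.io/cri-api with request/response message types.

```go
// Abridged, illustrative view of the CRI operations named in the surrounding
// log lines, in the order they occur. Signatures are simplified placeholders,
// not the real k8s.io/cri-api definitions.
package main

import "fmt"

// runtimeOps mirrors the subset of CRI calls visible in this log.
type runtimeOps interface {
	// Teardown of the old cilium pod.
	StopContainer(containerID string, timeoutSeconds int64) error
	StopPodSandbox(sandboxID string) error
	RemoveContainer(containerID string) error
	ContainerStatus(containerID string) error // returns NotFound once removed

	// Bring-up of the replacement pod.
	RunPodSandbox(podName, namespace string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

func main() {
	fmt.Println("teardown: StopContainer -> StopPodSandbox -> RemoveContainer -> ContainerStatus (NotFound)")
	fmt.Println("bring-up: RunPodSandbox -> CreateContainer -> StartContainer, repeated per init container")
}
```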
Jan 30 15:46:31.024452 containerd[1460]: time="2025-01-30T15:46:31.024365400Z" level=info msg="shim disconnected" id=e14e681d382df294e4f3b834a8e5413cef961f62402811c55cf8bbfe011751f6 namespace=k8s.io Jan 30 15:46:31.024452 containerd[1460]: time="2025-01-30T15:46:31.024443875Z" level=warning msg="cleaning up after shim disconnected" id=e14e681d382df294e4f3b834a8e5413cef961f62402811c55cf8bbfe011751f6 namespace=k8s.io Jan 30 15:46:31.024452 containerd[1460]: time="2025-01-30T15:46:31.024455597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:46:31.343242 containerd[1460]: time="2025-01-30T15:46:31.342539769Z" level=info msg="CreateContainer within sandbox \"470eb2cbce4a489309168f1ccef62a7dd17e05bfd309aee14752600cd59f6d1d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 15:46:31.370819 containerd[1460]: time="2025-01-30T15:46:31.370615530Z" level=info msg="CreateContainer within sandbox \"470eb2cbce4a489309168f1ccef62a7dd17e05bfd309aee14752600cd59f6d1d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e313ad86bba3eaec68f719255723e0dcf0c77bf3ae34915f16c9fd5da7ac2bff\"" Jan 30 15:46:31.372434 containerd[1460]: time="2025-01-30T15:46:31.372079138Z" level=info msg="StartContainer for \"e313ad86bba3eaec68f719255723e0dcf0c77bf3ae34915f16c9fd5da7ac2bff\"" Jan 30 15:46:31.433769 systemd[1]: Started cri-containerd-e313ad86bba3eaec68f719255723e0dcf0c77bf3ae34915f16c9fd5da7ac2bff.scope - libcontainer container e313ad86bba3eaec68f719255723e0dcf0c77bf3ae34915f16c9fd5da7ac2bff. Jan 30 15:46:31.479905 containerd[1460]: time="2025-01-30T15:46:31.479851341Z" level=info msg="StartContainer for \"e313ad86bba3eaec68f719255723e0dcf0c77bf3ae34915f16c9fd5da7ac2bff\" returns successfully" Jan 30 15:46:31.483561 systemd[1]: cri-containerd-e313ad86bba3eaec68f719255723e0dcf0c77bf3ae34915f16c9fd5da7ac2bff.scope: Deactivated successfully. Jan 30 15:46:31.522213 containerd[1460]: time="2025-01-30T15:46:31.522060911Z" level=info msg="shim disconnected" id=e313ad86bba3eaec68f719255723e0dcf0c77bf3ae34915f16c9fd5da7ac2bff namespace=k8s.io Jan 30 15:46:31.522213 containerd[1460]: time="2025-01-30T15:46:31.522125852Z" level=warning msg="cleaning up after shim disconnected" id=e313ad86bba3eaec68f719255723e0dcf0c77bf3ae34915f16c9fd5da7ac2bff namespace=k8s.io Jan 30 15:46:31.522213 containerd[1460]: time="2025-01-30T15:46:31.522141460Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:46:31.902086 kubelet[1855]: E0130 15:46:31.901984 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:32.353305 containerd[1460]: time="2025-01-30T15:46:32.352746852Z" level=info msg="CreateContainer within sandbox \"470eb2cbce4a489309168f1ccef62a7dd17e05bfd309aee14752600cd59f6d1d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 15:46:32.396635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3060084781.mount: Deactivated successfully. 
Jan 30 15:46:32.403079 containerd[1460]: time="2025-01-30T15:46:32.402834312Z" level=info msg="CreateContainer within sandbox \"470eb2cbce4a489309168f1ccef62a7dd17e05bfd309aee14752600cd59f6d1d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"838bb925aca5cdbd7060d0398aa061215f8bcd62131671e021757dbb9a98c57e\"" Jan 30 15:46:32.404890 containerd[1460]: time="2025-01-30T15:46:32.404767473Z" level=info msg="StartContainer for \"838bb925aca5cdbd7060d0398aa061215f8bcd62131671e021757dbb9a98c57e\"" Jan 30 15:46:32.469572 systemd[1]: Started cri-containerd-838bb925aca5cdbd7060d0398aa061215f8bcd62131671e021757dbb9a98c57e.scope - libcontainer container 838bb925aca5cdbd7060d0398aa061215f8bcd62131671e021757dbb9a98c57e. Jan 30 15:46:32.504470 systemd[1]: cri-containerd-838bb925aca5cdbd7060d0398aa061215f8bcd62131671e021757dbb9a98c57e.scope: Deactivated successfully. Jan 30 15:46:32.506735 containerd[1460]: time="2025-01-30T15:46:32.506692535Z" level=info msg="StartContainer for \"838bb925aca5cdbd7060d0398aa061215f8bcd62131671e021757dbb9a98c57e\" returns successfully" Jan 30 15:46:32.535683 containerd[1460]: time="2025-01-30T15:46:32.535516960Z" level=info msg="shim disconnected" id=838bb925aca5cdbd7060d0398aa061215f8bcd62131671e021757dbb9a98c57e namespace=k8s.io Jan 30 15:46:32.535683 containerd[1460]: time="2025-01-30T15:46:32.535577552Z" level=warning msg="cleaning up after shim disconnected" id=838bb925aca5cdbd7060d0398aa061215f8bcd62131671e021757dbb9a98c57e namespace=k8s.io Jan 30 15:46:32.535683 containerd[1460]: time="2025-01-30T15:46:32.535588763Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:46:32.539409 systemd[1]: run-containerd-runc-k8s.io-838bb925aca5cdbd7060d0398aa061215f8bcd62131671e021757dbb9a98c57e-runc.h0W0IR.mount: Deactivated successfully. Jan 30 15:46:32.539519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-838bb925aca5cdbd7060d0398aa061215f8bcd62131671e021757dbb9a98c57e-rootfs.mount: Deactivated successfully. Jan 30 15:46:32.903007 kubelet[1855]: E0130 15:46:32.902954 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:33.359916 containerd[1460]: time="2025-01-30T15:46:33.359714202Z" level=info msg="CreateContainer within sandbox \"470eb2cbce4a489309168f1ccef62a7dd17e05bfd309aee14752600cd59f6d1d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 15:46:33.400095 containerd[1460]: time="2025-01-30T15:46:33.399844899Z" level=info msg="CreateContainer within sandbox \"470eb2cbce4a489309168f1ccef62a7dd17e05bfd309aee14752600cd59f6d1d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c423de3e589a4596f77f67e1d63c4ebca255f353a81a4fb7d6c4a41b79f82bb0\"" Jan 30 15:46:33.402138 containerd[1460]: time="2025-01-30T15:46:33.402038980Z" level=info msg="StartContainer for \"c423de3e589a4596f77f67e1d63c4ebca255f353a81a4fb7d6c4a41b79f82bb0\"" Jan 30 15:46:33.463633 systemd[1]: Started cri-containerd-c423de3e589a4596f77f67e1d63c4ebca255f353a81a4fb7d6c4a41b79f82bb0.scope - libcontainer container c423de3e589a4596f77f67e1d63c4ebca255f353a81a4fb7d6c4a41b79f82bb0. Jan 30 15:46:33.488351 systemd[1]: cri-containerd-c423de3e589a4596f77f67e1d63c4ebca255f353a81a4fb7d6c4a41b79f82bb0.scope: Deactivated successfully. 
Jan 30 15:46:33.494722 containerd[1460]: time="2025-01-30T15:46:33.494117280Z" level=info msg="StartContainer for \"c423de3e589a4596f77f67e1d63c4ebca255f353a81a4fb7d6c4a41b79f82bb0\" returns successfully" Jan 30 15:46:33.537562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c423de3e589a4596f77f67e1d63c4ebca255f353a81a4fb7d6c4a41b79f82bb0-rootfs.mount: Deactivated successfully. Jan 30 15:46:33.539540 containerd[1460]: time="2025-01-30T15:46:33.539447513Z" level=info msg="shim disconnected" id=c423de3e589a4596f77f67e1d63c4ebca255f353a81a4fb7d6c4a41b79f82bb0 namespace=k8s.io Jan 30 15:46:33.540024 containerd[1460]: time="2025-01-30T15:46:33.539747149Z" level=warning msg="cleaning up after shim disconnected" id=c423de3e589a4596f77f67e1d63c4ebca255f353a81a4fb7d6c4a41b79f82bb0 namespace=k8s.io Jan 30 15:46:33.540024 containerd[1460]: time="2025-01-30T15:46:33.539785341Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:46:33.839076 kubelet[1855]: E0130 15:46:33.839000 1855 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:33.904053 kubelet[1855]: E0130 15:46:33.903958 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:33.998882 kubelet[1855]: E0130 15:46:33.998636 1855 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 15:46:34.367442 containerd[1460]: time="2025-01-30T15:46:34.367365129Z" level=info msg="CreateContainer within sandbox \"470eb2cbce4a489309168f1ccef62a7dd17e05bfd309aee14752600cd59f6d1d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 15:46:34.411414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231678120.mount: Deactivated successfully. Jan 30 15:46:34.415922 containerd[1460]: time="2025-01-30T15:46:34.414128240Z" level=info msg="CreateContainer within sandbox \"470eb2cbce4a489309168f1ccef62a7dd17e05bfd309aee14752600cd59f6d1d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"418efa9cd4957a6e8ca67d3c06bf8eb8b9bb15ade7ab491a7da8a744e5d25f75\"" Jan 30 15:46:34.415922 containerd[1460]: time="2025-01-30T15:46:34.415491232Z" level=info msg="StartContainer for \"418efa9cd4957a6e8ca67d3c06bf8eb8b9bb15ade7ab491a7da8a744e5d25f75\"" Jan 30 15:46:34.476460 systemd[1]: Started cri-containerd-418efa9cd4957a6e8ca67d3c06bf8eb8b9bb15ade7ab491a7da8a744e5d25f75.scope - libcontainer container 418efa9cd4957a6e8ca67d3c06bf8eb8b9bb15ade7ab491a7da8a744e5d25f75. Jan 30 15:46:34.527764 containerd[1460]: time="2025-01-30T15:46:34.527693163Z" level=info msg="StartContainer for \"418efa9cd4957a6e8ca67d3c06bf8eb8b9bb15ade7ab491a7da8a744e5d25f75\" returns successfully" Jan 30 15:46:34.538091 systemd[1]: run-containerd-runc-k8s.io-418efa9cd4957a6e8ca67d3c06bf8eb8b9bb15ade7ab491a7da8a744e5d25f75-runc.DE8VQZ.mount: Deactivated successfully. 
Jan 30 15:46:34.904584 kubelet[1855]: E0130 15:46:34.904523 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:34.931441 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 15:46:34.992415 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Jan 30 15:46:35.405774 kubelet[1855]: I0130 15:46:35.405677 1855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t2svg" podStartSLOduration=5.405650601 podStartE2EDuration="5.405650601s" podCreationTimestamp="2025-01-30 15:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:46:35.404406497 +0000 UTC m=+82.011498688" watchObservedRunningTime="2025-01-30 15:46:35.405650601 +0000 UTC m=+82.012742742" Jan 30 15:46:35.905708 kubelet[1855]: E0130 15:46:35.905638 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:36.004841 kubelet[1855]: I0130 15:46:36.004742 1855 setters.go:580] "Node became not ready" node="172.24.4.74" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T15:46:36Z","lastTransitionTime":"2025-01-30T15:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 15:46:36.176600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3242180915.mount: Deactivated successfully. Jan 30 15:46:36.905886 kubelet[1855]: E0130 15:46:36.905810 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:36.980075 containerd[1460]: time="2025-01-30T15:46:36.979268640Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:36.981651 containerd[1460]: time="2025-01-30T15:46:36.981613438Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 15:46:36.982920 containerd[1460]: time="2025-01-30T15:46:36.982895427Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:46:36.984756 containerd[1460]: time="2025-01-30T15:46:36.984183796Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.075634514s" Jan 30 15:46:36.984756 containerd[1460]: time="2025-01-30T15:46:36.984659512Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 15:46:36.987571 containerd[1460]: time="2025-01-30T15:46:36.987414314Z" 
level=info msg="CreateContainer within sandbox \"08e682d37ca868708f10584c420eb3fb330d69623a0eb0d3ebbf80b23f346319\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 15:46:37.010473 containerd[1460]: time="2025-01-30T15:46:37.010432262Z" level=info msg="CreateContainer within sandbox \"08e682d37ca868708f10584c420eb3fb330d69623a0eb0d3ebbf80b23f346319\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3af35bbcb141049597cdc054609716fd25949fff75280cb8047b833d2bbff436\"" Jan 30 15:46:37.012376 containerd[1460]: time="2025-01-30T15:46:37.011456883Z" level=info msg="StartContainer for \"3af35bbcb141049597cdc054609716fd25949fff75280cb8047b833d2bbff436\"" Jan 30 15:46:37.048469 systemd[1]: Started cri-containerd-3af35bbcb141049597cdc054609716fd25949fff75280cb8047b833d2bbff436.scope - libcontainer container 3af35bbcb141049597cdc054609716fd25949fff75280cb8047b833d2bbff436. Jan 30 15:46:37.166447 containerd[1460]: time="2025-01-30T15:46:37.166344626Z" level=info msg="StartContainer for \"3af35bbcb141049597cdc054609716fd25949fff75280cb8047b833d2bbff436\" returns successfully" Jan 30 15:46:37.415030 kubelet[1855]: I0130 15:46:37.414945 1855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-5fwwv" podStartSLOduration=1.336731935 podStartE2EDuration="7.414925023s" podCreationTimestamp="2025-01-30 15:46:30 +0000 UTC" firstStartedPulling="2025-01-30 15:46:30.907823593 +0000 UTC m=+77.514915694" lastFinishedPulling="2025-01-30 15:46:36.986016671 +0000 UTC m=+83.593108782" observedRunningTime="2025-01-30 15:46:37.414866984 +0000 UTC m=+84.021959095" watchObservedRunningTime="2025-01-30 15:46:37.414925023 +0000 UTC m=+84.022017124" Jan 30 15:46:37.906828 kubelet[1855]: E0130 15:46:37.906786 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:38.139759 systemd-networkd[1371]: lxc_health: Link UP Jan 30 15:46:38.143696 systemd-networkd[1371]: lxc_health: Gained carrier Jan 30 15:46:38.907341 kubelet[1855]: E0130 15:46:38.907276 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:39.908327 kubelet[1855]: E0130 15:46:39.908208 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:40.061521 systemd-networkd[1371]: lxc_health: Gained IPv6LL Jan 30 15:46:40.909237 kubelet[1855]: E0130 15:46:40.909159 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:41.910813 kubelet[1855]: E0130 15:46:41.909706 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 15:46:42.779972 systemd[1]: run-containerd-runc-k8s.io-418efa9cd4957a6e8ca67d3c06bf8eb8b9bb15ade7ab491a7da8a744e5d25f75-runc.kmu3Ni.mount: Deactivated successfully. 
Jan 30 15:46:42.910578 kubelet[1855]: E0130 15:46:42.910522 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:43.911247 kubelet[1855]: E0130 15:46:43.911174 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:44.911837 kubelet[1855]: E0130 15:46:44.911752 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:45.912860 kubelet[1855]: E0130 15:46:45.912698 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:46.913901 kubelet[1855]: E0130 15:46:46.913801 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:47.914506 kubelet[1855]: E0130 15:46:47.914419 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:48.914987 kubelet[1855]: E0130 15:46:48.914895 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:49.915469 kubelet[1855]: E0130 15:46:49.915386 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:50.916498 kubelet[1855]: E0130 15:46:50.916367 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:51.917579 kubelet[1855]: E0130 15:46:51.917498 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:52.918455 kubelet[1855]: E0130 15:46:52.918302 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:53.838653 kubelet[1855]: E0130 15:46:53.838558 1855 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:53.919491 kubelet[1855]: E0130 15:46:53.919392 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:54.920013 kubelet[1855]: E0130 15:46:54.919842 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:55.920956 kubelet[1855]: E0130 15:46:55.920871 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:56.921924 kubelet[1855]: E0130 15:46:56.921835 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:57.922999 kubelet[1855]: E0130 15:46:57.922887 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:58.923704 kubelet[1855]: E0130 15:46:58.923627 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:46:59.924873 kubelet[1855]: E0130 15:46:59.924762 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:00.925117 kubelet[1855]: E0130 15:47:00.924939 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:01.926319 kubelet[1855]: E0130 15:47:01.926219 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:02.926823 kubelet[1855]: E0130 15:47:02.926743 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:03.486509 update_engine[1443]: I20250130 15:47:03.485872 1443 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 30 15:47:03.486509 update_engine[1443]: I20250130 15:47:03.485925 1443 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 30 15:47:03.486509 update_engine[1443]: I20250130 15:47:03.486117 1443 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 30 15:47:03.487179 update_engine[1443]: I20250130 15:47:03.486716 1443 omaha_request_params.cc:62] Current group set to lts
Jan 30 15:47:03.487179 update_engine[1443]: I20250130 15:47:03.486856 1443 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 30 15:47:03.487179 update_engine[1443]: I20250130 15:47:03.486871 1443 update_attempter.cc:643] Scheduling an action processor start.
Jan 30 15:47:03.487179 update_engine[1443]: I20250130 15:47:03.486891 1443 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 30 15:47:03.487179 update_engine[1443]: I20250130 15:47:03.486929 1443 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 30 15:47:03.487179 update_engine[1443]: I20250130 15:47:03.487003 1443 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 30 15:47:03.487179 update_engine[1443]: I20250130 15:47:03.487017 1443 omaha_request_action.cc:272] Request:
Jan 30 15:47:03.487179 update_engine[1443]:
Jan 30 15:47:03.487179 update_engine[1443]:
Jan 30 15:47:03.487179 update_engine[1443]:
Jan 30 15:47:03.487179 update_engine[1443]:
Jan 30 15:47:03.487179 update_engine[1443]:
Jan 30 15:47:03.487179 update_engine[1443]:
Jan 30 15:47:03.487179 update_engine[1443]:
Jan 30 15:47:03.487179 update_engine[1443]:
Jan 30 15:47:03.487179 update_engine[1443]: I20250130 15:47:03.487025 1443 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 15:47:03.489455 update_engine[1443]: I20250130 15:47:03.489049 1443 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 15:47:03.489522 update_engine[1443]: I20250130 15:47:03.489459 1443 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 15:47:03.489758 locksmithd[1464]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 30 15:47:03.503304 update_engine[1443]: E20250130 15:47:03.503196 1443 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 15:47:03.503483 update_engine[1443]: I20250130 15:47:03.503347 1443 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 30 15:47:03.927975 kubelet[1855]: E0130 15:47:03.927894 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:04.928741 kubelet[1855]: E0130 15:47:04.928663 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:05.929970 kubelet[1855]: E0130 15:47:05.929890 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:06.930771 kubelet[1855]: E0130 15:47:06.930592 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:07.931860 kubelet[1855]: E0130 15:47:07.931770 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:08.932767 kubelet[1855]: E0130 15:47:08.932704 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:09.933564 kubelet[1855]: E0130 15:47:09.933438 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:10.934764 kubelet[1855]: E0130 15:47:10.934609 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:11.935350 kubelet[1855]: E0130 15:47:11.935218 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:12.935715 kubelet[1855]: E0130 15:47:12.935626 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:13.479149 update_engine[1443]: I20250130 15:47:13.478999 1443 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 15:47:13.479742 update_engine[1443]: I20250130 15:47:13.479414 1443 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 15:47:13.479807 update_engine[1443]: I20250130 15:47:13.479748 1443 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
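The fetcher then simply tries again: the attempt at 15:47:03 ended in "No HTTP response, retry 1", and the attempt starting here fails the same way at 15:47:13 with retry 2, roughly ten seconds apart. A purely illustrative sketch of that fixed-interval retry pattern (check_for_update is a hypothetical helper, not update_engine's actual backoff code):

    import time
    import urllib.error
    import urllib.request

    def check_for_update(url: str, retries: int = 3, wait_s: float = 10.0) -> bool:
        """Probe the update server with a fixed retry interval (illustrative only)."""
        for attempt in range(1, retries + 1):
            try:
                urllib.request.urlopen(url, timeout=5)  # a real client would POST an Omaha request
                return True
            except urllib.error.URLError as exc:
                print(f"no HTTP response, retry {attempt}: {exc.reason}")
                if attempt < retries:
                    time.sleep(wait_s)
        return False

    # With the server set to the literal string "disabled", every attempt fails:
    check_for_update("http://disabled/")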
Jan 30 15:47:13.490763 update_engine[1443]: E20250130 15:47:13.490653 1443 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 15:47:13.490977 update_engine[1443]: I20250130 15:47:13.490777 1443 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 30 15:47:13.839183 kubelet[1855]: E0130 15:47:13.839009 1855 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:13.891496 containerd[1460]: time="2025-01-30T15:47:13.891355989Z" level=info msg="StopPodSandbox for \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\""
Jan 30 15:47:13.893587 containerd[1460]: time="2025-01-30T15:47:13.891635535Z" level=info msg="TearDown network for sandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" successfully"
Jan 30 15:47:13.893587 containerd[1460]: time="2025-01-30T15:47:13.891682445Z" level=info msg="StopPodSandbox for \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" returns successfully"
Jan 30 15:47:13.893587 containerd[1460]: time="2025-01-30T15:47:13.892767140Z" level=info msg="RemovePodSandbox for \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\""
Jan 30 15:47:13.893587 containerd[1460]: time="2025-01-30T15:47:13.892834920Z" level=info msg="Forcibly stopping sandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\""
Jan 30 15:47:13.893587 containerd[1460]: time="2025-01-30T15:47:13.892979526Z" level=info msg="TearDown network for sandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" successfully"
Jan 30 15:47:13.903422 containerd[1460]: time="2025-01-30T15:47:13.903245012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 15:47:13.903647 containerd[1460]: time="2025-01-30T15:47:13.903444393Z" level=info msg="RemovePodSandbox \"786119ceb20921a6c33feaa751ac8a7c1d04d82c1339ac0577d1738efb4d733e\" returns successfully"
Jan 30 15:47:13.936041 kubelet[1855]: E0130 15:47:13.935946 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:14.937190 kubelet[1855]: E0130 15:47:14.937112 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:15.937784 kubelet[1855]: E0130 15:47:15.937540 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:16.938132 kubelet[1855]: E0130 15:47:16.938021 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:17.938510 kubelet[1855]: E0130 15:47:17.938437 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:18.939705 kubelet[1855]: E0130 15:47:18.939638 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 15:47:19.940501 kubelet[1855]: E0130 15:47:19.940394 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
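The entry that dominates this log, "Unable to read config path" for /etc/kubernetes/manifests from file_linux.go:61 and file.go:104, comes from the kubelet's file-based (static pod) config source: the configured staticPodPath does not exist, so the watcher logs the miss about once a second and otherwise ignores it, exactly as the err field says. It is harmless on a node that runs no static pods; creating the directory should be enough to quiet it. A minimal remediation sketch, assuming the kubelet really is configured with staticPodPath=/etc/kubernetes/manifests and the script runs as root:

    import os

    # Ensure the kubelet's staticPodPath exists so the file-based config source
    # stops logging "path does not exist, ignoring" once a second.
    STATIC_POD_PATH = "/etc/kubernetes/manifests"  # path taken from the log entries above

    if not os.path.isdir(STATIC_POD_PATH):
        os.makedirs(STATIC_POD_PATH, mode=0o755, exist_ok=True)
        print(f"created {STATIC_POD_PATH}; the kubelet warning should stop")
    else:
        print(f"{STATIC_POD_PATH} already exists")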